Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from the
Hugging Face Hub).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods that are
common to all models:
- resize the input token embeddings when new tokens are added to the vocabulary
- prune the attention heads of the model (both are illustrated below)
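A minimal sketch of both operations, assuming an arbitrary BERT checkpoint and placeholder new tokens (names chosen only for illustration):

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the tokenizer, then resize the input embeddings to match the new vocabulary size
tokenizer.add_tokens(["<new_tok_1>", "<new_tok_2>"])
model.resize_token_embeddings(len(tokenizer))

# Prune heads 0 and 2 of layer 0 and head 1 of layer 1 (dict: layer index -> list of head indices)
model.prune_heads({0: [0, 2], 1: [1]})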
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModelUtilsMixin] (for the TensorFlow models). For
text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models) provide the common methods.
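As an illustration, here is a minimal sketch of the loading, saving and generation entry points; the gpt2 checkpoint and the local path are arbitrary choices:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Save to a local directory and load back from it
model.save_pretrained("./my-local-model")
reloaded = AutoModelForCausalLM.from_pretrained("./my-local-model")

# Text generation is provided by GenerationMixin through `generate`
inputs = tokenizer("Hello, my name is", return_tensors="pt")
output_ids = reloaded.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))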
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method was reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model and then loading the pretrained weights inside it (which takes twice the size of the model in RAM: one copy for the randomly initialized model and one for the weights), there is an option to create the model as an empty shell and only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way, the maximum RAM used is the full size of the model only.
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (this only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest to the CPU, or even to the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
from transformers import AutoModelForSeq2SeqLM
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
t0pp.hf_device_map
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
'lm_head': 'cpu'}
You can also write your own device map following the same format (a dictionary mapping layer names to devices). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have enough GPU memory):
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
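You then pass this map to from_pretrained in place of "auto"; a minimal sketch, reusing the T0pp checkpoint from above:

from transformers import AutoModelForSeq2SeqLM

# The custom map defined above: every top-level module is assigned to a device
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map=device_map)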
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
In PyTorch, a model is normally instantiated in torch.float32. This can be an issue if you try to
load a model whose weights are stored in fp16, since it would then require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
or, if you want the model to always be loaded in the most memory-efficient way, you can use the special value "auto",
in which case the dtype is automatically derived from the model's weights:
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
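You can check which dtype was actually selected through the model's dtype property (provided by [~modeling_utils.ModuleUtilsMixin]); a small usage sketch continuing the example above:

print(model.dtype)  # e.g. torch.float32 or torch.float16, depending on how the checkpoint weights were saved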
Models instantiated from scratch can also be told which dtype to use with:
from transformers import AutoModel, T5Config

config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
Due to PyTorch design, this functionality is only available for floating-point dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
FNet
Overview
The FNet model was proposed in FNet: Mixing Tokens with Fourier Transforms by
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. The model replaces the self-attention layer in a BERT
model with a Fourier transform, keeping only the real parts of the transform. The model is significantly faster
than the BERT model because it has fewer parameters and is more memory efficient. It achieves about 92-97% of the
accuracy of its BERT counterparts on the GLUE benchmark and trains much faster than the BERT model. The abstract from the
paper is the following:
We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the
self-attention sublayers with simple linear transformations that “mix” input tokens. These linear mixers, along with
standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text
classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder
with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE
benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths,
our FNet model is significantly faster: when compared to the “efficient” Transformers on the Long Range Arena
benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all
sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint
and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models
outperform Transformer counterparts.
Tips on usage:
The model was trained without an attention mask since it is based on the Fourier transform. It was trained with
a maximum sequence length of 512, which includes pad tokens. Hence, it is highly recommended to use the same maximum
sequence length for fine-tuning and inference.
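For instance, a minimal sketch of padding and truncating inputs to that length (the example sentence is arbitrary; google/fnet-base is used as the checkpoint):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
# Pad (and truncate) every example to the maximum sequence length used during pretraining
inputs = tokenizer(
    "FNet replaces self-attention with Fourier transforms.",
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)  # torch.Size([1, 512])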
This model was contributed by gchhablani. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FNetConfig
class transformers.FNetConfig
( vocab_size = 32000, hidden_size = 768, num_hidden_layers = 12, intermediate_size = 3072, hidden_act = 'gelu_new', hidden_dropout_prob = 0.1, max_position_embeddings = 512, type_vocab_size = 4, initializer_range = 0.02, layer_norm_eps = 1e-12, use_tpu_fourier_optimizations = False, tpu_short_seq_length = 512, pad_token_id = 3, bos_token_id = 1, eos_token_id = 2, **kwargs )
Parameters
vocab_size (int, optional, defaults to 32000) —
Vocabulary size of the FNet model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling FNetModel or TFFNetModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 4) —
The vocabulary size of the token_type_ids passed when calling FNetModel or TFFNetModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
use_tpu_fourier_optimizations (bool, optional, defaults to False) —
Determines whether to use TPU-optimized FFTs. If True, the model will favor axis-wise FFT transforms.
Set to False for GPU/CPU hardware, in which case n-dimensional FFTs are used.
tpu_short_seq_length (int, optional, defaults to 512) —
The sequence length that is expected by the model when using TPUs. This will be used to initialize the DFT
matrix only when use_tpu_fourier_optimizations is set to True and the input sequence is shorter than or
equal to 4096 tokens.
This is the configuration class to store the configuration of a FNetModel. It is used to instantiate an FNet
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the FNet
google/fnet-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FNetConfig, FNetModel
# Initializing a FNet fnet-base style configuration
configuration = FNetConfig()
# Initializing a model (with random weights) from the fnet-base style configuration
model = FNetModel(configuration)
# Accessing the model configuration
configuration = model.config
FNetTokenizer
class transformers.FNetTokenizer
( vocab_file, do_lower_case = False, remove_space = True, keep_accents = True, unk_token = '<unk>', sep_token = '[SEP]', pad_token = '<pad>', cls_token = '[CLS]', mask_token = '[MASK]', sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None, **kwargs )
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to True) —
Whether or not to keep accents when tokenizing.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite, samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct an FNet tokenizer. Adapted from AlbertTokenizer. Based on
SentencePiece. This tokenizer inherits from PreTrainedTokenizer
which contains most of the main methods. Users should refer to this superclass for more information regarding those
methods.
build_inputs_with_special_tokens
( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An FNet sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
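As a rough usage sketch (the sentences are arbitrary and google/fnet-base is used as the checkpoint), the same layout is produced when this method is called on a pair of token ID lists:

from transformers import FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Sentence A"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Sentence B"))
input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# input_ids now follows the [CLS] A [SEP] B [SEP] layout described above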
get_special_tokens_mask
( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None, already_has_special_tokens: bool = False ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An FNet sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
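In practice you rarely call this method directly; encoding a sentence pair with the tokenizer returns the same mask as token_type_ids. A small sketch with arbitrary sentences:

from transformers import FNetTokenizer

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
encoding = tokenizer("Sentence A", "Sentence B")
# 0s cover the first segment (including [CLS] and the first [SEP]), 1s cover the second segment
print(encoding["token_type_ids"])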
save_vocabulary
( save_directory: str, filename_prefix: typing.Optional[str] = None )
FNetTokenizerFast
class transformers.FNetTokenizerFast
( vocab_file = None, tokenizer_file = None, do_lower_case = False, remove_space = True, keep_accents = True, unk_token = '<unk>', sep_token = '[SEP]', pad_token = '<pad>', cls_token = '[CLS]', mask_token = '[MASK]', **kwargs )
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to True) —
Whether or not to keep accents when tokenizing.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a “fast” FNetTokenizer (backed by HuggingFace’s tokenizers library). Adapted from
AlbertTokenizerFast. Based on
Unigram. This
tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An FNet sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
( token_ids_0: typing.List[int], token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An FNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
FNetModel
class transformers.FNetModel
( config, add_pooling_layer = True )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FNet Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
The model can behave as an encoder, following the architecture described in FNet: Mixing Tokens with Fourier
Transforms by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
forward
( input_ids: typing.Optional[torch.LongTensor] = None, token_type_ids: typing.Optional[torch.LongTensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FNetForPreTraining
class transformers.FNetForPreTraining
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, next_sentence_label: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
The FNetForPreTraining forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForPreTraining.from_pretrained("google/fnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
FNetForMaskedLM
class transformers.FNetForMaskedLM
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForMaskedLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForMaskedLM.from_pretrained("google/fnet-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
FNetForNextSentencePrediction
class transformers.FNetForNextSentencePrediction
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with a next sentence prediction (classification) head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, **kwargs ) → transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForNextSentencePrediction.from_pretrained("google/fnet-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
FNetForSequenceClassification
class transformers.FNetForSequenceClassification
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, FNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForSequenceClassification.from_pretrained("google/fnet-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FNetForSequenceClassification.from_pretrained("google/fnet-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, FNetForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForSequenceClassification.from_pretrained("google/fnet-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FNetForSequenceClassification.from_pretrained(
    "google/fnet-base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
FNetForMultipleChoice
class transformers.FNetForMultipleChoice
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForMultipleChoice.from_pretrained("google/fnet-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
FNetForTokenClassification
class transformers.FNetForTokenClassification
( config )
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
( input_ids: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForTokenClassification.from_pretrained("google/fnet-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
FNetForQuestionAnswering
class transformers.FNetForQuestionAnswering
(
config
)
Parameters
config (FNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FNetForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FNetForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetForQuestionAnswering.from_pretrained("google/fnet-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
ByT5
Overview
The ByT5 model was presented in ByT5: Towards a token-free future with pre-trained byte-to-byte models by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir
Kale, Adam Roberts, Colin Raffel.
The abstract from the paper is the following:
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units.
Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from
the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they
can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by
removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token
sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of
operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with
minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count,
training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level
counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on
tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of
pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our
experiments.
This model was contributed by patrickvonplaten. The original code can be
found here.
ByT5’s architecture is based on the T5v1.1 model, so one can refer to T5v1.1’s documentation page. They
only differ in how inputs should be prepared for the model, see the code examples below.
Since ByT5 was pre-trained in an unsupervised fashion, there’s no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Example
ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer:
from transformers import T5ForConditionalGeneration
import torch
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
num_special_tokens = 3
# Model has 3 special tokens which take up the input ids 0,1,2 of ByT5.
# => Need to shift utf-8 character encodings by 3 before passing ids to model.
input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens
loss = model(input_ids, labels=labels).loss
loss.item()
2.66
For batched inference and training, however, it is recommended to make use of the tokenizer:
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model_inputs = tokenizer(
... ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt"
... )
labels_dict = tokenizer(
... ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt"
... )
labels = labels_dict.input_ids
loss = model(**model_inputs, labels=labels).loss
loss.item()
17.9
Similar to T5, ByT5 was trained on the span-mask denoising task. However,
since the model works directly on characters, the pretraining task is a bit
different. Let’s corrupt some characters of the
input sentence "The dog chases a ball in the park." and ask ByT5 to predict them
for us.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base")
input_ids_prompt = "The dog chases a ball in the park."
input_ids = tokenizer(input_ids_prompt).input_ids
# Note that we cannot add "{extra_id_...}" to the string directly
# as the Byte tokenizer would incorrectly merge the tokens
# For ByT5, we need to work directly on the character level
# Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead
# uses the final UTF-8 character ids.
# UTF-8 bytes cover 2**8 = 256 values and ByT5 has 3 special tokens.
# => There are 2**8 + 3 = 259 input ids and mask tokens count down from index 258.
# => mask to "The dog [258]a ball [257]park."
input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]])
input_ids
tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]])
# ByT5 produces only one char at a time so we need to produce many more output characters here -> set `max_length=100`.
output_ids = model.generate(input_ids, max_length=100)[0].tolist()
output_ids
[0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49]
# ^- Note how 258 descends to 257, 256, 255
# Now we need to split on the sentinel tokens, let's write a short loop for this
output_ids_list = []
start_token = 0
sentinel_token = 258
while sentinel_token in output_ids:
... split_idx = output_ids.index(sentinel_token)
... output_ids_list.append(output_ids[start_token:split_idx])
... start_token = split_idx
... sentinel_token -= 1
output_ids_list.append(output_ids[start_token:])
output_string = tokenizer.batch_decode(output_ids_list)
output_string
['<pad>', 'is the one who does', ' in the disco', 'in the park. The dog is the one who does a ball in', ' in the park.']
ByT5Tokenizer
class transformers.ByT5Tokenizer
(
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
extra_ids = 125
additional_special_tokens = None
**kwargs
)
Parameters
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 125) —
Number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are
indexed from the end of the vocabulary up to the beginning ("<extra_id_0>" is the last token in the vocabulary,
like in ByT5 preprocessing, see
here).
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
Construct a ByT5 tokenizer. ByT5 simply uses raw UTF-8 byte encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
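As a quick usage sketch (an addition to this reference, not part of the upstream docstring): the tokenizer needs no vocabulary file, each UTF-8 byte is shifted by the 3 special tokens, and an eos token is appended. The ids in the comment below follow from that offset and are indicative only.
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer()  # vocabulary-free: the "vocabulary" is just the 256 byte values
ids = tokenizer("hello").input_ids
# Each byte is shifted by 3 (pad=0, eos=1, unk=2) and an eos token is appended,
# so the ids should come out as [107, 104, 111, 111, 114, 1].
text = tokenizer.decode(ids, skip_special_tokens=True)  # back to "hello"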
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
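For illustration, a minimal sketch of calling this method directly on ids produced without special tokens (this snippet is an addition, not part of the upstream reference):
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer()
token_ids_0 = tokenizer("hi", add_special_tokens=False).input_ids
# Appends the eos token, giving the single-sequence format: X </s>
with_special_tokens = tokenizer.build_inputs_with_special_tokens(token_ids_0)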
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. ByT5 does not
make use of token type ids, therefore a list of zeros is returned.
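A short sketch of the (trivial) output, assuming two already-tokenized sequences (added here for illustration):
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer()
ids_a = tokenizer("first", add_special_tokens=False).input_ids
ids_b = tokenizer("second", add_special_tokens=False).input_ids
# ByT5 does not use token type ids, so this is simply a list of zeros covering
# both sequences once the special tokens have been added.
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)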
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
See ByT5Tokenizer for all details.
Deformable DETR
Overview
The Deformable DETR model was proposed in Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference.
The abstract from the paper is the following:
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.
Tips:
One can use DeformableDetrImageProcessor to prepare images (and optional targets) for the model.
Training Deformable DETR is equivalent to training the original DETR model. See the resources section below for demo notebooks.
Deformable DETR architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.
Object Detection
Demo notebooks regarding inference + fine-tuning on a custom dataset for DeformableDetrForObjectDetection can be found here.
See also: Object detection task guide.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DeformableDetrImageProcessor
class transformers.DeformableDetrImageProcessor
(
format: typing.Union[str, transformers.models.deformable_detr.image_processing_deformable_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'>
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
do_pad: bool = True
**kwargs
)
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be
overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}) —
Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in
the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be
overridden by the do_pad parameter in the preprocess method.
Constructs a Deformable DETR image processor.
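A minimal inference-time sketch (added for illustration; the checkpoint name mirrors the examples further down this page):
from transformers import DeformableDetrImageProcessor
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = DeformableDetrImageProcessor.from_pretrained("SenseTime/deformable-detr")
# Resizes, rescales, normalizes and (by default) pads the image, returning
# pixel_values and a pixel_mask ready to feed to the model.
encoding = image_processor(images=image, return_tensors="pt")
print(encoding["pixel_values"].shape)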
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None
return_segmentation_masks: bool = None
masks_path: typing.Union[str, pathlib.Path, NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
format: typing.Union[str, transformers.models.deformable_detr.image_processing_deformable_detr.AnnotionFormat, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
annotations (AnnotationType or List[AnnotationType], optional) —
List of annotations associated with the image or batch of images. If annotation is for object
detection, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“annotations” (List[Dict]): List of annotations for an image. Each annotation should be a
dictionary. An image can have no annotations, in which case the list should be empty.
If annotation is for segmentation, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary.
An image can have no segments, in which case the list should be empty.
“file_name” (str): The file name of the image.
return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) —
Whether to return segmentation masks.
masks_path (str or pathlib.Path, optional) —
Path to the directory containing the segmentation masks.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use when resizing the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to use when rescaling the image.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to use when normalizing the image.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to use when normalizing the image.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
format (str or AnnotionFormat, optional, defaults to self.format) —
Format of the annotations.
return_tensors (str or TensorType, optional, defaults to self.return_tensors) —
Type of tensors to return. If None, will return the list of images.
data_format (str or ChannelDimension, optional, defaults to self.data_format) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Preprocess an image or a batch of images so that it can be used by the model.
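As a hedged sketch of training-time preprocessing with object-detection annotations: the outer dict keys follow the format described above, while the inner annotation dicts are assumed to follow the COCO detection format (bbox, category_id, area, iscrowd) and should be adapted to your own dataset.
from transformers import DeformableDetrImageProcessor
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One COCO-style annotation dict per image; the concrete values here are illustrative only.
annotations = {
    "image_id": 0,
    "annotations": [
        {"bbox": [16.0, 52.0, 300.0, 418.0], "category_id": 17, "area": 125400.0, "iscrowd": 0}
    ],
}

image_processor = DeformableDetrImageProcessor.from_pretrained("SenseTime/deformable-detr")
encoding = image_processor(images=image, annotations=annotations, return_tensors="pt")
# Besides pixel_values and pixel_mask, the encoding should also contain "labels"
# (class labels and normalized boxes) that can be passed to DeformableDetrForObjectDetection.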
post_process_object_detection
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
top_k: int = 100
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
top_k (int, optional, defaults to 100) —
Keep only top k bounding boxes before filtering by thresholding.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of DeformableDetrForObjectDetection into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
DeformableDetrFeatureExtractor
class transformers.DeformableDetrFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
post_process_object_detection
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
top_k: int = 100
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
top_k (int, optional, defaults to 100) —
Keep only top k bounding boxes before filtering by thresholding.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of DeformableDetrForObjectDetection into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
DeformableDetrConfig
class transformers.DeformableDetrConfig
(
use_timm_backbone = True
backbone_config = None
num_channels = 3
num_queries = 300
max_position_embeddings = 1024
encoder_layers = 6
encoder_ffn_dim = 1024
encoder_attention_heads = 8
decoder_layers = 6
decoder_ffn_dim = 1024
decoder_attention_heads = 8
encoder_layerdrop = 0.0
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
init_xavier_std = 1.0
return_intermediate = True
auxiliary_loss = False
position_embedding_type = 'sine'
backbone = 'resnet50'
use_pretrained_backbone = True
dilation = False
num_feature_levels = 4
encoder_n_points = 4
decoder_n_points = 4
two_stage = False
two_stage_num_proposals = 300
with_box_refine = False
class_cost = 1
bbox_cost = 5
giou_cost = 2
mask_loss_coefficient = 1
dice_loss_coefficient = 1
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
eos_coefficient = 0.1
focal_alpha = 0.25
disable_custom_kernels = False
**kwargs
)
Parameters
use_timm_backbone (bool, optional, defaults to True) —
Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone
API.
backbone_config (PretrainedConfig or dict, optional) —
The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which
case it will default to ResNetConfig().
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_queries (int, optional, defaults to 300) —
Number of object queries, i.e. detection slots. This is the maximal number of objects
DeformableDetrModel can detect in a single image. In case two_stage is set to True, we use
two_stage_num_proposals instead.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 1024) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 1024) —
Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
backbone (str, optional, defaults to "resnet50") —
Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional
backbone from the timm package. For a list of all available models, see this
page.
use_pretrained_backbone (bool, optional, defaults to True) —
Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True.
dilation (bool, optional, defaults to False) —
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
use_timm_backbone = True.
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
num_feature_levels (int, optional, defaults to 4) —
The number of input feature levels.
encoder_n_points (int, optional, defaults to 4) —
The number of sampled keys in each feature level for each attention head in the encoder.
decoder_n_points (int, optional, defaults to 4) —
The number of sampled keys in each feature level for each attention head in the decoder.
two_stage (bool, optional, defaults to False) —
Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of
Deformable DETR, which are further fed into the decoder for iterative bounding box refinement.
two_stage_num_proposals (int, optional, defaults to 300) —
The number of region proposals to be generated, in case two_stage is set to True.
with_box_refine (bool, optional, defaults to False) —
Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes
based on the predictions from the previous layer.
focal_alpha (float, optional, defaults to 0.25) —
Alpha parameter in the focal loss.
disable_custom_kernels (bool, optional, defaults to False) —
Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom
kernels are not supported by PyTorch ONNX export.
This is the configuration class to store the configuration of a DeformableDetrModel. It is used to instantiate
a Deformable DETR model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Deformable DETR
SenseTime/deformable-detr architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import DeformableDetrConfig, DeformableDetrModel
# Initializing a Deformable DETR SenseTime/deformable-detr style configuration
configuration = DeformableDetrConfig()
# Initializing a model (with random weights) from the SenseTime/deformable-detr style configuration
model = DeformableDetrModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
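A trivial sketch (added for illustration):
from transformers import DeformableDetrConfig

config = DeformableDetrConfig()
# to_dict() is overridden so that nested objects (such as a backbone_config) are
# serialized to plain dictionaries as well.
config_dict = config.to_dict()
print(config_dict["model_type"])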
DeformableDetrModel
class transformers.DeformableDetrModel
(
config: DeformableDetrConfig
)
Parameters
config (DeformableDetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw
hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See DeformableDetrImageProcessor.__call__()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or tuple(torch.FloatTensor)
A transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeformableDetrConfig) and inputs.
init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer
plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attention weights of the decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are
picked as region proposals in the first stage. Output of bounding box binary classification (i.e.
foreground and background).
enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.
The DeformableDetrModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DeformableDetrModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrModel.from_pretrained("SenseTime/deformable-detr")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 300, 256]
DeformableDetrForObjectDetection
class transformers.DeformableDetrForObjectDetection
(
config: DeformableDetrConfig
)
Parameters
config (DeformableDetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on
top, for tasks such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See DeformableDetrImageProcessor.__call__()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeformableDetrConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use ~DeformableDetrProcessor.post_process_object_detection to retrieve the
unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer
plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attention weights of the decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_heads, 4, 4). Attention weights of the encoder, after the attention softmax, used to compute the weighted average
in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder).
intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder).
init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder.
enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are
picked as region proposals in the first stage. Output of bounding box binary classification (i.e.
foreground and background).
enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage.
The DeformableDetrForObjectDetection forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
from PIL import Image
import requests
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[
... 0
... ]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected cat with confidence 0.8 at location [16.5, 52.84, 318.25, 470.78]
Detected cat with confidence 0.789 at location [342.19, 24.3, 640.02, 372.25]
Detected remote with confidence 0.633 at location [40.79, 72.78, 176.76, 117.25]
Conditional DETR
Overview
The Conditional DETR model was proposed in Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR.
The abstract from the paper is the following:
The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.
Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper.
This model was contributed by DepuMeng. The original code can be found here.
Documentation resources
Object detection task guide
ConditionalDetrConfig
class transformers.ConditionalDetrConfig
(
use_timm_backbone = True
backbone_config = None
num_channels = 3
num_queries = 300
encoder_layers = 6
encoder_ffn_dim = 2048
encoder_attention_heads = 8
decoder_layers = 6
decoder_ffn_dim = 2048
decoder_attention_heads = 8
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
init_xavier_std = 1.0
auxiliary_loss = False
position_embedding_type = 'sine'
backbone = 'resnet50'
use_pretrained_backbone = True
dilation = False
class_cost = 2
bbox_cost = 5
giou_cost = 2
mask_loss_coefficient = 1
dice_loss_coefficient = 1
cls_loss_coefficient = 2
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
focal_alpha = 0.25
**kwargs
)
Parameters
use_timm_backbone (bool, optional, defaults to True) —
Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone
API.
backbone_config (PretrainedConfig or dict, optional) —
The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which
case it will default to ResNetConfig().
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_queries (int, optional, defaults to 300) —
Number of object queries, i.e. detection slots. This is the maximal number of objects
ConditionalDetrModel can detect in a single image. For COCO, we recommend 300 queries.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
backbone (str, optional, defaults to "resnet50") —
Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional
backbone from the timm package. For a list of all available models, see this
page.
use_pretrained_backbone (bool, optional, defaults to True) —
Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True.
dilation (bool, optional, defaults to False) —
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
use_timm_backbone = True.
class_cost (float, optional, defaults to 2) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
focal_alpha (float, optional, defaults to 0.25) —
Alpha parameter in the focal loss.
This is the configuration class to store the configuration of a ConditionalDetrModel. It is used to instantiate
a Conditional DETR model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Conditional DETR
microsoft/conditional-detr-resnet-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import ConditionalDetrConfig, ConditionalDetrModel
# Initializing a Conditional DETR microsoft/conditional-detr-resnet-50 style configuration
configuration = ConditionalDetrConfig()
# Initializing a model (with random weights) from the microsoft/conditional-detr-resnet-50 style configuration
model = ConditionalDetrModel(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict() from PretrainedConfig.
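A minimal usage sketch; the round trip via from_dict (inherited from PretrainedConfig) is shown only for illustration:
from transformers import ConditionalDetrConfig
configuration = ConditionalDetrConfig()
config_dict = configuration.to_dict()
# The dictionary holds the architecture attributes plus the model_type identifier
print(config_dict["model_type"], config_dict["d_model"])
# Rebuild an equivalent configuration from the dictionary
restored_configuration = ConditionalDetrConfig.from_dict(config_dict)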
ConditionalDetrImageProcessor
class transformers.ConditionalDetrImageProcessor
(
format: typing.Union[str, transformers.models.conditional_detr.image_processing_conditional_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'>
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
do_pad: bool = True
**kwargs
)
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be
overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}) —
Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in
the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be
overridden by the do_pad parameter in the preprocess method.
Constructs a Conditional DETR image processor.
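As a minimal sketch, the image processor can be instantiated with the defaults above, or loaded from the microsoft/conditional-detr-resnet-50 checkpoint used elsewhere on this page:
from transformers import ConditionalDetrImageProcessor
# Instantiate with the defaults described above
image_processor = ConditionalDetrImageProcessor()
# Or load the preprocessing configuration shipped with a pretrained checkpoint
image_processor = ConditionalDetrImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")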
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None
return_segmentation_masks: bool = None
masks_path: typing.Union[str, pathlib.Path, NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
format: typing.Union[str, transformers.models.conditional_detr.image_processing_conditional_detr.AnnotionFormat, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
annotations (AnnotationType or List[AnnotationType], optional) —
List of annotations associated with the image or batch of images. If annotation is for object
detection, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“annotations” (List[Dict]): List of annotations for an image. Each annotation should be a
dictionary. An image can have no annotations, in which case the list should be empty.
If annotation is for segmentation, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary.
An image can have no segments, in which case the list should be empty.
“file_name” (str): The file name of the image.
return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) —
Whether to return segmentation masks.
masks_path (str or pathlib.Path, optional) —
Path to the directory containing the segmentation masks.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use when resizing the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to use when rescaling the image.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to use when normalizing the image.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to use when normalizing the image.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
format (str or AnnotionFormat, optional, defaults to self.format) —
Format of the annotations.
return_tensors (str or TensorType, optional, defaults to self.return_tensors) —
Type of tensors to return. If None, will return the list of images.
data_format (str or ChannelDimension, optional, defaults to self.data_format) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Preprocess an image or a batch of images so that it can be used by the model.
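Continuing with the image_processor from the sketch above, here is a hedged example of preprocessing an image together with a COCO-detection style annotation; the annotation values are illustrative only, and the keys follow the format described in the parameters above:
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# One dict per image: "image_id" plus a list of COCO-style "annotations"
# (each with a category_id, an [x, y, width, height] bbox, an area and an iscrowd flag)
annotation = {
    "image_id": 0,
    "annotations": [
        {"category_id": 17, "bbox": [10.0, 20.0, 200.0, 300.0], "area": 60000.0, "iscrowd": 0}
    ],
}
inputs = image_processor(images=image, annotations=annotation, return_tensors="pt")
# pixel_values, pixel_mask and labels, ready to be passed to a Conditional DETR model
print(inputs.keys())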
post_process_object_detection
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
top_k: int = 100
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
top_k (int, optional, defaults to 100) —
Keep only top k bounding boxes before filtering by thresholding.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of ConditionalDetrForObjectDetection into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
post_process_instance_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (ConditionalDetrForSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If unset, predictions will not be resized.
return_coco_annotation (bool, optional) —
Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE)
format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or a
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of ConditionalDetrForSegmentation into instance segmentation predictions. Only supports
PyTorch.
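A self-contained sketch of this method, using a randomly initialized segmentation model (as in the ConditionalDetrForSegmentation example later on this page), so the actual predictions are meaningless:
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrConfig, ConditionalDetrForSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForSegmentation(ConditionalDetrConfig())  # random weights
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
results = image_processor.post_process_instance_segmentation(
    outputs, threshold=0.5, target_sizes=[(image.height, image.width)]
)[0]
segmentation_map = results["segmentation"]  # (height, width) tensor of segment ids, -1 where nothing is kept
segments_info = results["segments_info"]    # per-segment id, label_id and score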
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple[int, int]] = None
)
→
List[torch.Tensor]
Parameters
outputs (ConditionalDetrForSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the
batch. If unset, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of ConditionalDetrForSegmentation into semantic segmentation maps. Only supports
PyTorch.
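Continuing from the instance-segmentation sketch above (same image_processor, image and outputs), semantic maps can be obtained in the same way:
semantic_maps = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[(image.height, image.width)]
)
# One (height, width) map per image; each entry is a semantic class id
semantic_map = semantic_maps[0]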
post_process_panoptic_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (ConditionalDetrForSegmentation) —
The outputs from ConditionalDetrForSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The label ids in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label id for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or
None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to
the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of ConditionalDetrForSegmentation into image panoptic segmentation predictions. Only
supports PyTorch.
ConditionalDetrFeatureExtractor
class transformers.ConditionalDetrFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
post_process_object_detection
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
top_k: int = 100
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If left to None, predictions will not be resized.
top_k (int, optional, defaults to 100) —
Keep only top k bounding boxes before filtering by thresholding.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of ConditionalDetrForObjectDetection into final bounding boxes in (top_left_x,
top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.
post_process_instance_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (ConditionalDetrForSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If unset, predictions will not be resized.
return_coco_annotation (bool, optional) —
Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE)
format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or a
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of ConditionalDetrForSegmentation into instance segmentation predictions. Only supports
PyTorch.
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple[int, int]] = None
)
→
List[torch.Tensor]
Parameters
outputs (ConditionalDetrForSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the
batch. If unset, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of ConditionalDetrForSegmentation into semantic segmentation maps. Only supports
PyTorch.
post_process_panoptic_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (ConditionalDetrForSegmentation) —
The outputs from ConditionalDetrForSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The label ids in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label id for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or
None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to
the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of ConditionalDetrForSegmentation into image panoptic segmentation predictions. Only
supports PyTorch.
ConditionalDetrModel
class transformers.ConditionalDetrModel
(
config: ConditionalDetrConfig
)
Parameters
config (ConditionalDetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw
hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or tuple(torch.FloatTensor)
A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConditionalDetrConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.
The ConditionalDetrModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = AutoModel.from_pretrained("microsoft/conditional-detr-resnet-50")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# the last hidden states are the final query embeddings of the Transformer decoder
# these are of shape (batch_size, num_queries, hidden_size)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 300, 256]
ConditionalDetrForObjectDetection
class transformers.ConditionalDetrForObjectDetection
(
config: ConditionalDetrConfig
)
Parameters
config (ConditionalDetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on
top, for tasks such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConditionalDetrConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve
the unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
The ConditionalDetrForObjectDetection forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, AutoModelForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[
... 0
... ]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45]
Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0]
Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95]
Detected remote with confidence 0.683 at location [334.48, 73.49, 366.37, 190.01]
Detected couch with confidence 0.535 at location [0.52, 1.19, 640.35, 475.1]
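For training, the labels argument described above can be built as in the sketch below, which continues from the example above; the class label and box values are illustrative only, and the boxes are assumed to use the normalized (center_x, center_y, width, height) convention of pred_boxes:
# One dict per image in the batch, with class_labels of shape (num_boxes,) and boxes of shape (num_boxes, 4)
labels = [
    {
        "class_labels": torch.tensor([17], dtype=torch.long),               # illustrative class id
        "boxes": torch.tensor([[0.5, 0.5, 0.4, 0.6]], dtype=torch.float),   # illustrative normalized box
    }
]
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.loss_dict)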
ConditionalDetrForSegmentation
class transformers.ConditionalDetrForSegmentation
(
config: ConditionalDetrConfig
)
Parameters
config (ConditionalDetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top,
for tasks such as COCO panoptic.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call()
for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss, DICE/F-1 loss and Focal loss. List of dicts, each
dictionary containing at least the following 3 keys: ‘class_labels’, ‘boxes’ and ‘masks’ (the class labels,
bounding boxes and segmentation masks of an image in the batch respectively). The class labels themselves
should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a
torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the masks a
torch.FloatTensor of shape (number of bounding boxes in the image, height, width).
Returns
transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConditionalDetrConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve
the unnormalized bounding boxes.
pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) — Segmentation masks logits for all queries. See also
post_process_semantic_segmentation(),
post_process_instance_segmentation() or
post_process_panoptic_segmentation() to evaluate semantic, instance and
panoptic segmentation masks respectively.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
The ConditionalDetrForSegmentation forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import io
import requests
from PIL import Image
import torch
import numpy
from transformers import (
... AutoImageProcessor,
... ConditionalDetrConfig,
... ConditionalDetrForSegmentation,
... )
from transformers.image_transforms import rgb_to_id
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
# randomly initialize all weights of the model
config = ConditionalDetrConfig()
model = ConditionalDetrForSegmentation(config)
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# Use the `post_process_panoptic_segmentation` method of the `image_processor` to retrieve post-processed panoptic segmentation maps
# Segmentation results are returned as a list of dictionaries
result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)])
# A tensor of shape (height, width) where each value denotes a segment id, filled with -1 if no segment is found
panoptic_seg = result[0]["segmentation"]
# Get prediction score and segment_id to class_id mapping of each segment
panoptic_segments_info = result[0]["segments_info"]
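Training with the segmentation head follows the labels format described above for the forward method; the sketch below continues from the example above and uses random, illustrative targets only:
# One dict per image: class_labels (num_objects,), boxes (num_objects, 4) and masks (num_objects, height, width)
num_objects = 2
height, width = inputs["pixel_values"].shape[-2:]
labels = [
    {
        "class_labels": torch.randint(0, config.num_labels, (num_objects,)),
        "boxes": torch.rand(num_objects, 4),
        "masks": torch.randint(0, 2, (num_objects, height, width)).float(),
    }
]
outputs = model(**inputs, labels=labels)
print(outputs.loss)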
EfficientNet
Overview
The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models.
The abstract from the paper is the following:
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet.
To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters.
This model was contributed by adirik.
The original code can be found here.
EfficientNetConfig
class transformers.EfficientNetConfig
(
num_channels: int = 3
image_size: int = 600
width_coefficient: float = 2.0
depth_coefficient: float = 3.1
depth_divisor: int = 8
kernel_sizes: typing.List[int] = [3, 3, 5, 3, 5, 5, 3]
in_channels: typing.List[int] = [32, 16, 24, 40, 80, 112, 192]
out_channels: typing.List[int] = [16, 24, 40, 80, 112, 192, 320]
depthwise_padding: typing.List[int] = []
strides: typing.List[int] = [1, 2, 2, 2, 1, 2, 1]
num_block_repeats: typing.List[int] = [1, 2, 2, 3, 3, 4, 1]
expand_ratios: typing.List[int] = [1, 6, 6, 6, 6, 6, 6]
squeeze_expansion_ratio: float = 0.25
hidden_act: str = 'swish'
hidden_dim: int = 2560
pooling_type: str = 'mean'
initializer_range: float = 0.02
batch_norm_eps: float = 0.001
batch_norm_momentum: float = 0.99
dropout_rate: float = 0.5
drop_connect_rate: float = 0.2
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 600) —
The input image size.
width_coefficient (float, optional, defaults to 2.0) —
Scaling coefficient for network width at each stage.
depth_coefficient (float, optional, defaults to 3.1) —
Scaling coefficient for network depth at each stage.
depth_divisor (int, optional, defaults to 8) —
A unit of network width.
kernel_sizes (List[int], optional, defaults to [3, 3, 5, 3, 5, 5, 3]) —
List of kernel sizes to be used in each block.
in_channels (List[int], optional, defaults to [32, 16, 24, 40, 80, 112, 192]) —
List of input channel sizes to be used in each block for convolutional layers.
out_channels (List[int], optional, defaults to [16, 24, 40, 80, 112, 192, 320]) —
List of output channel sizes to be used in each block for convolutional layers.
depthwise_padding (List[int], optional, defaults to []) —
List of block indices with square padding.
strides (List[int], optional, defaults to [1, 2, 2, 2, 1, 2, 1]) —
List of stride sizes to be used in each block for convolutional layers.
num_block_repeats (List[int], optional, defaults to [1, 2, 2, 3, 3, 4, 1]) —
List of the number of times each block is to be repeated.
expand_ratios (List[int], optional, defaults to [1, 6, 6, 6, 6, 6, 6]) —
List of scaling coefficients for each block.
squeeze_expansion_ratio (float, optional, defaults to 0.25) —
Squeeze expansion ratio.
hidden_act (str or function, optional, defaults to "swish") —
The non-linear activation function (function or string) in each block. If string, "gelu", "relu",
"selu", "gelu_new", "silu" and "mish" are supported.
hidden_dim (int, optional, defaults to 2560) —
The hidden dimension of the layer before the classification head.
pooling_type (str or function, optional, defaults to "mean") —
Type of final pooling to be applied before the dense classification head. Available options are ["mean",
"max"]
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
batch_norm_eps (float, optional, defaults to 1e-3) —
The epsilon used by the batch normalization layers.
batch_norm_momentum (float, optional, defaults to 0.99) —
The momentum used by the batch normalization layers.
dropout_rate (float, optional, defaults to 0.5) —
The dropout rate to be applied before final classifier layer.
drop_connect_rate (float, optional, defaults to 0.2) —
The drop rate for skip connections.
This is the configuration class to store the configuration of an EfficientNetModel. It is used to instantiate an
EfficientNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the EfficientNet
google/efficientnet-b7 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import EfficientNetConfig, EfficientNetModel
# Initializing an EfficientNet efficientnet-b7 style configuration
configuration = EfficientNetConfig()
# Initializing a model (with random weights) from the efficientnet-b7 style configuration
model = EfficientNetModel(configuration)
# Accessing the model configuration
configuration = model.config
EfficientNetImageProcessor
class transformers.EfficientNetImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = 0
do_center_crop: bool = False
crop_size: typing.Dict[str, int] = None
rescale_factor: typing.Union[int, float] = 0.00392156862745098
rescale_offset: bool = False
do_rescale: bool = True
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
include_top: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in preprocess.
size (Dict[str, int], optional, defaults to {"height": 346, "width": 346}):
Size of the image after resize. Can be overridden by size in preprocess.
resample (PILImageResampling filter, optional, defaults to PILImageResampling.NEAREST) —
Resampling filter to use if resizing the image. Can be overridden by resample in preprocess.
do_center_crop (bool, optional, defaults to False) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by do_center_crop in preprocess.
crop_size (Dict[str, int], optional, defaults to {"height": 289, "width": 289}):
Desired output size when applying center-cropping. Can be overridden by crop_size in preprocess.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
rescale_offset (bool, optional, defaults to False) —
Whether to rescale the image between [-scale_range, scale_range] instead of [0, scale_range]. Can be
overridden by the rescale_offset parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
include_top (bool, optional, defaults to True) —
Whether to rescale the image again. Should be set to True if the inputs are used for image classification.
Constructs an EfficientNet image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
rescale_offset: bool = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
include_top: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resize.
resample (PILImageResampling, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image. Only has an effect if do_resize is set to
True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after center crop. If one edge of the image is smaller than crop_size, it will be
padded with zeros and then cropped.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to be between [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
rescale_offset (bool, optional, defaults to self.rescale_offset) —
Whether to rescale the image between [-scale_range, scale_range] instead of [0, scale_range].
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
include_top (bool, optional, defaults to self.include_top) —
Rescales the image again for image classification if set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
None: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
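For reference, here is a minimal usage sketch of the image processor; the checkpoint and the example image URL are assumptions chosen for illustration:
from transformers import EfficientNetImageProcessor
from PIL import Image
import requests

# load an example image (any RGB image works)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7")
inputs = processor(images=image, return_tensors="pt")
# pixel_values has shape (batch_size, num_channels, height, width)
print(inputs["pixel_values"].shape)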
EfficientNetModel
class transformers.EfficientNetModel
(
config: EfficientNetConfig
)
Parameters
config (EfficientNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare EfficientNet model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EfficientNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The EfficientNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, EfficientNetModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetModel.from_pretrained("google/efficientnet-b7")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 768, 7, 7]
EfficientNetForImageClassification
class transformers.EfficientNetForImageClassification
(
config
)
Parameters
config (EfficientNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
EfficientNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g.
for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: FloatTensor = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EfficientNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The EfficientNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, EfficientNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
DETR
Overview
The DETR model was proposed in End-to-End Object Detection with Transformers by
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR
consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for
object detection. It removes much of the complexity of models like Faster R-CNN and Mask R-CNN, which rely on
components such as region proposals, a non-maximum suppression procedure and anchor generation. Moreover, DETR can also be
naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs.
The abstract from the paper is the following:
We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the
detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression
procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the
new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via
bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries,
DETR reasons about the relations of the objects and the global image context to directly output the final set of
predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many
other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and
highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily
generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive
baselines.
This model was contributed by nielsr. The original code can be found here.
Here’s a TLDR explaining how DetrForObjectDetection works:
First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use
ResNet-50/ResNet-101). Let’s assume we also add a batch dimension. This means that the input to the backbone is a
tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone
outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is
then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a
nn.Conv2d layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the
feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) =
(batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually
longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher).
Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider
these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape
(batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros.
These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to
the encoder, they are added to the input of each attention layer. Each object query will look for a particular object
in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers
to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads
are added on top for object detection: a linear layer for classifying each object query into one of the objects or “no
object”, and an MLP to predict bounding boxes for each query.
The model is trained using a bipartite matching loss: so what we actually do is compare the predicted classes +
bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N
(so if an image only contains 4 objects, 96 annotations will just have a “no object” as class and “no bounding box” as
bounding box). The Hungarian matching algorithm is used to find
an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for
the classes) and a linear combination of the L1 and generalized IoU loss (for the
bounding boxes) are used to optimize the parameters of the model.
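To make the matching step concrete, here is a toy sketch of the Hungarian assignment using scipy; this is not the library's internal implementation, and the cost values below are made up:
import numpy as np
from scipy.optimize import linear_sum_assignment

# toy cost matrix: rows = N object queries, columns = padded ground-truth annotations,
# where each entry combines the class cost, the L1 box cost and the generalized IoU cost
cost_matrix = np.random.rand(100, 100)

# the Hungarian algorithm finds the optimal one-to-one mapping of queries to annotations
query_indices, target_indices = linear_sum_assignment(cost_matrix)

# each matched pair then contributes cross-entropy (classes) plus L1 + generalized IoU (boxes)
# to the loss, while queries matched to padded "no object" annotations only get the class term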
DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance
segmentation). DetrForSegmentation adds a segmentation mask head on top of
DetrForObjectDetection. The mask head can be trained either jointly, or in a two-step process,
where one first trains a DetrForObjectDetection model to detect bounding boxes around both
“things” (instances) and “stuff” (background things like trees, roads, sky), then freezes all the weights and trains only
the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is
required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
Tips:
DETR uses so-called object queries to detect objects in an image. The number of queries determines the maximum
number of objects that can be detected in a single image, and is set to 100 by default (see parameter
num_queries of DetrConfig). Note that it’s good to have some slack (in COCO, the
authors used 100, while the maximum number of objects in a COCO image is ~70).
The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2,
which use autoregressive decoding instead of parallel decoding. Hence, no causal attention mask is used.
DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting
to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned
absolute position embeddings. By default, the parameter position_embedding_type of
DetrConfig is set to "sine".
During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help
the model output the correct number of objects of each class. If you set the parameter auxiliary_loss of
DetrConfig to True, then prediction feedforward neural networks and Hungarian losses
are added after each decoder layer (with the FFNs sharing parameters).
If you want to train the model in a distributed environment across multiple nodes, then one should update the
num_boxes variable in the DetrLoss class of modeling_detr.py. When training on multiple nodes, this should be
set to the average number of target boxes across all nodes, as can be seen in the original implementation here.
DetrForObjectDetection and DetrForSegmentation can be initialized with
any convolutional backbone available in the timm library.
Initializing with a MobileNet backbone for example can be done by setting the backbone attribute of
DetrConfig to "tf_mobilenetv3_small_075", and then initializing the model with that
config.
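A minimal sketch of that tip (randomly initialized Transformer, MobileNetV3 backbone from timm):
from transformers import DetrConfig, DetrForObjectDetection

# swap the default ResNet-50 backbone for a timm MobileNetV3 variant
config = DetrConfig(backbone="tf_mobilenetv3_small_075", use_timm_backbone=True)
model = DetrForObjectDetection(config)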
DETR resizes the input images such that the shortest side is at least a certain number of pixels while the longest is
at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at
least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use
DetrImageProcessor to prepare images (and optional annotations in COCO format) for the
model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the
largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding.
Alternatively, one can also define a custom collate_fn in order to batch images together, using
DetrImageProcessor.pad_and_create_pixel_mask() (a sketch is shown after these tips).
The size of the images will determine the amount of memory being used, and will thus determine the batch_size.
It is advised to use a batch size of 2 per GPU. See this Github thread for more info.
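As a sketch of such a custom collate_fn, assuming a dataset whose items already contain unbatched pixel_values and labels produced by DetrImageProcessor:
from transformers import DetrImageProcessor

image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")

def collate_fn(batch):
    # pad all images in the batch to the largest size and build the matching pixel mask
    pixel_values = [item["pixel_values"] for item in batch]
    encoding = image_processor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
    return {
        "pixel_values": encoding["pixel_values"],
        "pixel_mask": encoding["pixel_mask"],
        "labels": [item["labels"] for item in batch],
    }
The resulting dictionary can then be passed directly to the model (or a Trainer) as a batch.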
There are three ways to instantiate a DETR model (depending on what you prefer):
Option 1: Instantiate DETR with pre-trained weights for entire model
from transformers import DetrForObjectDetection
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone
from transformers import DetrConfig, DetrForObjectDetection
config = DetrConfig()
model = DetrForObjectDetection(config)
Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer
config = DetrConfig(use_pretrained_backbone=False)
model = DetrForObjectDetection(config)
As a summary, consider the following table:
| Task | Object detection | Instance segmentation | Panoptic segmentation |
|------|------------------|-----------------------|-----------------------|
| Description | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as “stuff” (i.e. background things like trees and roads) in an image |
| Model | DetrForObjectDetection | DetrForSegmentation | DetrForSegmentation |
| Example dataset | COCO detection | COCO detection, COCO panoptic | COCO panoptic |
| Format of annotations to provide to DetrImageProcessor | {‘image_id’: int, ‘annotations’: List[Dict]}, each Dict being a COCO object annotation | {‘image_id’: int, ‘annotations’: List[Dict]} (in case of COCO detection) or {‘file_name’: str, ‘image_id’: int, ‘segments_info’: List[Dict]} (in case of COCO panoptic) | {‘file_name’: str, ‘image_id’: int, ‘segments_info’: List[Dict]} and masks_path (path to directory containing PNG files of the masks) |
| Postprocessing (i.e. converting the output of the model to COCO API) | post_process() | post_process_segmentation() | post_process_segmentation(), post_process_panoptic() |
| Evaluators | CocoEvaluator with iou_types="bbox" | CocoEvaluator with iou_types="bbox" or "segm" | CocoEvaluator with iou_types="bbox" or "segm", PanopticEvaluator |
In short, one should prepare the data either in COCO detection or COCO panoptic format, then use
DetrImageProcessor to create pixel_values, pixel_mask and optional
labels, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the
outputs of the model using one of the postprocessing methods of DetrImageProcessor. These can
be provided to either CocoEvaluator or PanopticEvaluator, which allow you to calculate metrics like
mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the original repository. See the example notebooks for more info regarding evaluation.
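As a sketch of that preparation step for a single image in COCO detection format (the annotation values below are made up for illustration):
from transformers import DetrImageProcessor
from PIL import Image
import requests

image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# one COCO-style object annotation (category_id, bbox in [x, y, width, height], area, iscrowd)
target = {
    "image_id": 39769,
    "annotations": [
        {"image_id": 39769, "category_id": 17, "bbox": [100.0, 50.0, 80.0, 60.0], "area": 4800.0, "iscrowd": 0}
    ],
}
encoding = image_processor(images=image, annotations=target, return_tensors="pt")
# pixel_values, pixel_mask and labels are now ready to train or fine-tune DetrForObjectDetection
print(encoding.keys())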
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR.
Object Detection
All example notebooks illustrating fine-tuning DetrForObjectDetection and DetrForSegmentation on a custom dataset can be found here.
See also: Object detection task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DETR specific outputs
class transformers.models.detr.modeling_detr.DetrModelOutput
(
last_hidden_state: FloatTensor = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
intermediate_hidden_states: typing.Optional[torch.FloatTensor] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) —
Intermediate decoder activations, i.e. the output of each decoder layer, each of them having gone through a
layernorm.
Base class for outputs of the DETR encoder-decoder model. This class adds one attribute to Seq2SeqModelOutput,
namely an optional stack of intermediate decoder activations, i.e. the output of each decoder layer, each of them
having gone through a layernorm. This is useful when training the model with auxiliary decoding losses.
class transformers.models.detr.modeling_detr.DetrObjectDetectionOutput
(
loss: typing.Optional[torch.FloatTensor] = None
loss_dict: typing.Optional[typing.Dict] = None
logits: FloatTensor = None
pred_boxes: FloatTensor = None
auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None
last_hidden_state: typing.Optional[torch.FloatTensor] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) —
Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) —
A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) —
Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) —
Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) —
Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
Output type of DetrForObjectDetection.
class transformers.models.detr.modeling_detr.DetrSegmentationOutput
(
loss: typing.Optional[torch.FloatTensor] = None
loss_dict: typing.Optional[typing.Dict] = None
logits: FloatTensor = None
pred_boxes: FloatTensor = None
pred_masks: FloatTensor = None
auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None
last_hidden_state: typing.Optional[torch.FloatTensor] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) —
Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) —
A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) —
Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) —
Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) —
Segmentation masks logits for all queries. See also
post_process_semantic_segmentation(),
post_process_instance_segmentation() or
post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic
segmentation masks respectively.
auxiliary_outputs (list[Dict], optional) —
Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
Output type of DetrForSegmentation.
DetrConfig
class transformers.DetrConfig
(
use_timm_backbone = True
backbone_config = None
num_channels = 3
num_queries = 100
encoder_layers = 6
encoder_ffn_dim = 2048
encoder_attention_heads = 8
decoder_layers = 6
decoder_ffn_dim = 2048
decoder_attention_heads = 8
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
is_encoder_decoder = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
init_xavier_std = 1.0
auxiliary_loss = False
position_embedding_type = 'sine'
backbone = 'resnet50'
use_pretrained_backbone = True
dilation = False
class_cost = 1
bbox_cost = 5
giou_cost = 2
mask_loss_coefficient = 1
dice_loss_coefficient = 1
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
eos_coefficient = 0.1
**kwargs
)
Parameters
use_timm_backbone (bool, optional, defaults to True) —
Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone
API.
backbone_config (PretrainedConfig or dict, optional) —
The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which
case it will default to ResNetConfig().
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_queries (int, optional, defaults to 100) —
Number of object queries, i.e. detection slots. This is the maximal number of objects DetrModel can
detect in a single image. For COCO, we recommend 100 queries.
d_model (int, optional, defaults to 256) —
Dimension of the layers.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) —
The scaling factor used for the Xavier initialization gain in the HM Attention map module.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
position_embedding_type (str, optional, defaults to "sine") —
Type of position embeddings to be used on top of the image features. One of "sine" or "learned".
backbone (str, optional, defaults to "resnet50") —
Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional
backbone from the timm package. For a list of all available models, see this
page.
use_pretrained_backbone (bool, optional, defaults to True) —
Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True.
dilation (bool, optional, defaults to False) —
Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when
use_timm_backbone = True.
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
mask_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the Focal loss in the panoptic segmentation loss.
dice_loss_coefficient (float, optional, defaults to 1) —
Relative weight of the DICE/F-1 loss in the panoptic segmentation loss.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
This is the configuration class to store the configuration of a DetrModel. It is used to instantiate a DETR
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DETR
facebook/detr-resnet-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import DetrConfig, DetrModel
# Initializing a DETR facebook/detr-resnet-50 style configuration
configuration = DetrConfig()
# Initializing a model (with random weights) from the facebook/detr-resnet-50 style configuration
model = DetrModel(configuration)
# Accessing the model configuration
configuration = model.config
from_backbone_config
(
backbone_config: PretrainedConfig
**kwargs
)
→
DetrConfig
Parameters
backbone_config (PretrainedConfig) —
The backbone configuration.
Returns
DetrConfig
An instance of a configuration object
Instantiate a DetrConfig (or a derived class) from a pre-trained backbone model configuration.
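For instance, a minimal sketch assuming a ResNet backbone configuration from the library:
from transformers import DetrConfig, ResNetConfig

# build a DETR configuration around an AutoBackbone-style ResNet configuration
backbone_config = ResNetConfig()
config = DetrConfig.from_backbone_config(backbone_config)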
to_dict
(
)
Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns:
Dict[str, any]: Dictionary of all the attributes that make up this configuration instance.
DetrImageProcessor
class transformers.DetrImageProcessor
(
format: typing.Union[str, transformers.models.detr.image_processing_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'>
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
do_pad: bool = True
**kwargs
)
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be
overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}):
Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter
in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be
overridden by the do_pad parameter in the preprocess method.
Constructs a Detr image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None
return_segmentation_masks: bool = None
masks_path: typing.Union[str, pathlib.Path, NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
format: typing.Union[str, transformers.models.detr.image_processing_detr.AnnotionFormat, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
annotations (AnnotationType or List[AnnotationType], optional) —
List of annotations associated with the image or batch of images. If annotation is for object
detection, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“annotations” (List[Dict]): List of annotations for an image. Each annotation should be a
dictionary. An image can have no annotations, in which case the list should be empty.
If annotation is for segmentation, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary.
An image can have no segments, in which case the list should be empty.
“file_name” (str): The file name of the image.
return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) —
Whether to return segmentation masks.
masks_path (str or pathlib.Path, optional) —
Path to the directory containing the segmentation masks.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use when resizing the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to use when rescaling the image.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to use when normalizing the image.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to use when normalizing the image.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
format (str or AnnotionFormat, optional, defaults to self.format) —
Format of the annotations.
return_tensors (str or TensorType, optional, defaults to self.return_tensors) —
Type of tensors to return. If None, will return the list of images.
data_format (str or ChannelDimension, optional, defaults to self.data_format) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Preprocess an image or a batch of images so that it can be used by the model.
post_process_object_detection
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
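A minimal end-to-end sketch of this post-processing step (the 0.9 threshold is an arbitrary choice):
import torch
import requests
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# rescale the normalized boxes back to the original image size and keep confident predictions
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])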
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple[int, int]] = None
)
→
List[torch.Tensor]
Parameters
outputs (DetrForSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the
batch. If unset, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch.
post_process_instance_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (DetrForSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If unset, predictions will not be resized.
return_coco_annotation (bool, optional) —
Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE)
format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch.
post_process_panoptic_segmentation
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (DetrForSegmentation) —
The outputs from DetrForSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id or
None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to
the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports
PyTorch.
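A minimal sketch focusing on label_ids_to_fuse, assuming the facebook/detr-resnet-50-panoptic checkpoint; the label id in the set below is purely illustrative (check model.config.id2label for the actual mapping), and all instances of that label end up fused into a single segment:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DetrForSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# 17 is used here purely as an example label id; all of its instances are fused
results = image_processor.post_process_panoptic_segmentation(
    outputs, label_ids_to_fuse={17}, target_sizes=[image.size[::-1]]
)
for segment in results[0]["segments_info"]:
    print(segment["id"], segment["label_id"], segment["was_fused"], segment["score"])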
DetrFeatureExtractor
class transformers.DetrFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
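DetrFeatureExtractor is kept for backwards compatibility and shares its preprocessing with DetrImageProcessor. A minimal sketch of batching two images, assuming the facebook/detr-resnet-50 checkpoint and the default padding behaviour (in which case the output contains both pixel_values and pixel_mask):
import requests
from PIL import Image
from transformers import DetrFeatureExtractor
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
# images in a batch may have different sizes; they are resized and padded to a common shape
encoding = feature_extractor(images=[image, image], return_tensors="pt")
print(encoding["pixel_values"].shape)  # (batch_size, num_channels, height, width)
print(encoding["pixel_mask"].shape)    # (batch_size, height, width)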
post_process_object_detection
<
source
>
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
outputs (DetrObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
post_process_semantic_segmentation
<
source
>
(
outputs
target_sizes: typing.List[typing.Tuple[int, int]] = None
)
→
List[torch.Tensor]
Parameters
outputs (DetrForSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple[int, int]], optional) —
A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the
batch. If unset, predictions will not be resized.
Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width)
corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each
torch.Tensor corresponds to a semantic class id.
Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch.
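A minimal usage sketch, assuming the facebook/detr-resnet-50-panoptic checkpoint; each returned map holds a semantic class id per pixel:
import torch
import requests
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
semantic_maps = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)
print(semantic_maps[0].shape)  # (height, width) of the input image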
post_process_instance_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
return_coco_annotation: typing.Optional[bool] = False
)
→
List[Dict]
Parameters
outputs (DetrForSegmentation) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
target_sizes (List[Tuple], optional) —
List of length batch_size, where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction. If unset, predictions will not be resized.
return_coco_annotation (bool, optional) —
Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE)
format.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or
List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to
True. Set to None if no mask is found above threshold.
segments_info — A dictionary that contains additional information on each segment.
id — An integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
score — Prediction score of segment with segment_id.
Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch.
post_process_panoptic_segmentation
<
source
>
(
outputs
threshold: float = 0.5
mask_threshold: float = 0.5
overlap_mask_area_threshold: float = 0.8
label_ids_to_fuse: typing.Optional[typing.Set[int]] = None
target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None
)
→
List[Dict]
Parameters
outputs (DetrForSegmentation) —
The outputs from DetrForSegmentation.
threshold (float, optional, defaults to 0.5) —
The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) —
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) —
The overlap mask area threshold to merge or discard small disconnected parts within each binary
instance mask.
label_ids_to_fuse (Set[int], optional) —
The labels in this set will have all their instances fused together. For instance, we could say
there can only be one sky in an image, but several persons, so the label ID for sky would be in that
set, but not the one for person.
target_sizes (List[Tuple], optional) —
List of length batch_size, where each list item (Tuple[int, int]) corresponds to the requested
final size (height, width) of each prediction in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — a tensor of shape (height, width) where each pixel represents a segment_id or
None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to
the corresponding target_sizes entry.
segments_info — A dictionary that contains additional information on each segment.
id — an integer representing the segment_id.
label_id — An integer representing the label / semantic class id corresponding to segment_id.
was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise.
Multiple instances of the same class / label were fused and assigned a single segment_id.
score — Prediction score of segment with segment_id.
Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports
PyTorch.
DetrModel
class transformers.DetrModel
<
source
>
(
config: DetrConfig
)
Parameters
config (DetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without
any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
<
source
>
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DetrConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a
layernorm.
The DetrModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoImageProcessor, DetrModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrModel.from_pretrained("facebook/detr-resnet-50")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# the last hidden states are the final query embeddings of the Transformer decoder
# these are of shape (batch_size, num_queries, hidden_size)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 100, 256]
DetrForObjectDetection
class transformers.DetrForObjectDetection
<
source
>
(
config: DetrConfig
)
Parameters
config (DetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks
such as COCO detection.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
<
source
>
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch
respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DetrConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
The DetrForObjectDetection forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[
... 0
... ]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
... box = [round(i, 2) for i in box.tolist()]
... print(
... f"Detected {model.config.id2label[label.item()]} with confidence "
... f"{round(score.item(), 3)} at location {box}"
... )
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
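For training, the labels argument described above expects one dict per image, with class_labels and boxes in normalized (center_x, center_y, width, height) format. The sketch below uses made-up annotations purely to show the expected shapes; real class ids and boxes would come from your dataset:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
inputs = image_processor(images=image, return_tensors="pt")
labels = [
    {
        "class_labels": torch.tensor([17, 17]),  # one (illustrative) class id per box
        "boxes": torch.tensor([[0.25, 0.5, 0.4, 0.6], [0.75, 0.5, 0.4, 0.6]]),  # normalized cx, cy, w, h
    }
]
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.loss_dict)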
DetrForSegmentation
class transformers.DetrForSegmentation
<
source
>
(
config: DetrConfig
)
Parameters
config (DetrConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top, for tasks
such as COCO panoptic.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
<
source
>
(
pixel_values
pixel_mask = None
decoder_attention_mask = None
encoder_outputs = None
inputs_embeds = None
decoder_inputs_embeds = None
labels = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it.
Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, num_queries), optional) —
Not used by default. Can be used to mask object queries.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you
can choose to directly pass a flattened representation of an image.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) —
Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an
embedded representation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss, DICE/F-1 loss and Focal loss. List of dicts, each
dictionary containing at least the following 3 keys: ‘class_labels’, ‘boxes’ and ‘masks’ (the class labels,
bounding boxes and segmentation masks of an image in the batch respectively). The class labels themselves
should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a
torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the masks a
torch.FloatTensor of shape (number of bounding boxes in the image, height, width).
Returns
transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.detr.modeling_detr.DetrSegmentationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DetrConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) — Segmentation mask logits for all queries. See also
post_process_semantic_segmentation(),
post_process_instance_segmentation() or
post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic
segmentation masks respectively.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each
layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax,
used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each
layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
The DetrForSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
import io
import requests
from PIL import Image
import torch
import numpy
from transformers import AutoImageProcessor, DetrForSegmentation
from transformers.image_transforms import rgb_to_id
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
# forward pass
outputs = model(**inputs)
# Use the `post_process_panoptic_segmentation` method of the `image_processor` to retrieve post-processed panoptic segmentation maps
# Segmentation results are returned as a list of dictionaries
result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)])
# A tensor of shape (height, width) where each value denotes a segment id, filled with -1 if no segment is found
panoptic_seg = result[0]["segmentation"]
# Get prediction score and segment_id to class_id mapping of each segment
panoptic_segments_info = result[0]["segments_info"]
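For training, the labels argument described above parallels the object-detection case but additionally expects a masks key per image. The sketch below uses dummy annotations only to illustrate the expected shapes; real class labels, boxes and masks would come from a panoptic dataset such as COCO panoptic:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DetrForSegmentation
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
inputs = image_processor(images=image, return_tensors="pt")
width, height = image.size
masks = torch.zeros((1, height, width))     # one binary mask per annotated object
masks[:, : height // 2, : width // 2] = 1.0  # dummy region, for illustration only
labels = [
    {
        "class_labels": torch.tensor([17]),             # one (illustrative) class id per object
        "boxes": torch.tensor([[0.25, 0.25, 0.5, 0.5]]),  # normalized cx, cy, w, h
        "masks": masks,
    }
]
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.loss_dict)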
Autoformer
Overview
The Autoformer model was proposed in Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.
The abstract from the paper is the following:
Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.
This model was contributed by elisim and kashif.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Autoformer blog post on the Hugging Face blog: Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
AutoformerConfig
class transformers.AutoformerConfig
<
source
>
(
prediction_length: typing.Optional[int] = None
context_length: typing.Optional[int] = None
distribution_output: str = 'student_t'
loss: str = 'nll'
input_size: int = 1
lags_sequence: typing.List[int] = [1, 2, 3, 4, 5, 6, 7]
scaling: bool = True
num_time_features: int = 0
num_dynamic_real_features: int = 0
num_static_categorical_features: int = 0
num_static_real_features: int = 0
cardinality: typing.Optional[typing.List[int]] = None
embedding_dimension: typing.Optional[typing.List[int]] = None
d_model: int = 64
encoder_attention_heads: int = 2
decoder_attention_heads: int = 2
encoder_layers: int = 2
decoder_layers: int = 2
encoder_ffn_dim: int = 32
decoder_ffn_dim: int = 32
activation_function: str = 'gelu'
dropout: float = 0.1
encoder_layerdrop: float = 0.1
decoder_layerdrop: float = 0.1
attention_dropout: float = 0.1
activation_dropout: float = 0.1
num_parallel_samples: int = 100
init_std: float = 0.02
use_cache: bool = True
is_encoder_decoder = True
label_length: int = 10
moving_average: int = 25
autocorrelation_factor: int = 3
**kwargs
)
Parameters
prediction_length (int) —
The prediction length for the decoder. In other words, the prediction horizon of the model.
context_length (int, optional, defaults to prediction_length) —
The context length for the encoder. If unset, the context length will be the same as the
prediction_length.
distribution_output (string, optional, defaults to "student_t") —
The distribution emission head for the model. Could be either “student_t”, “normal” or “negative_binomial”.
loss (string, optional, defaults to "nll") —
The loss function for the model corresponding to the distribution_output head. For parametric
distributions it is the negative log likelihood (nll), which is currently the only supported one.
input_size (int, optional, defaults to 1) —
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
multivariate targets.
lags_sequence (list[int], optional, defaults to [1, 2, 3, 4, 5, 6, 7]) —
The lags of the input time series, used as covariates, often dictated by the frequency of the data. Default is [1, 2, 3, 4, 5, 6, 7].
scaling (bool, optional, defaults to True) —
Whether to scale the input targets.
num_time_features (int, optional, defaults to 0) —
The number of time features in the input time series.
num_dynamic_real_features (int, optional, defaults to 0) —
The number of dynamic real valued features.
num_static_categorical_features (int, optional, defaults to 0) —
The number of static categorical features.
num_static_real_features (int, optional, defaults to 0) —
The number of static real valued features.
cardinality (list[int], optional) —
The cardinality (number of different values) for each of the static categorical features. Should be a list
of integers, having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
embedding_dimension (list[int], optional) —
The dimension of the embedding for each of the static categorical features. Should be a list of integers,
having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
d_model (int, optional, defaults to 64) —
Dimensionality of the transformer layers.
encoder_layers (int, optional, defaults to 2) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 2) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
decoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in decoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and decoder. If string, "gelu" and
"relu" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the encoder, and decoder.
encoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each encoder layer.
decoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each decoder layer.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention probabilities.
activation_dropout (float, optional, defaults to 0.1) —
The dropout probability used between the two layers of the feed-forward networks.
num_parallel_samples (int, optional, defaults to 100) —
The number of samples to generate in parallel for each time step of inference.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated normal weight initialization distribution.
use_cache (bool, optional, defaults to True) —
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
label_length (int, optional, defaults to 10) —
Start token length of the Autoformer decoder, which is used for direct multi-step prediction (i.e.
non-autoregressive generation).
moving_average (int, defaults to 25) —
The window size of the moving average. In practice, it’s the kernel size in AvgPool1d of the Decomposition
Layer.
autocorrelation_factor (int, defaults to 3) —
“Attention” (i.e. Auto-Correlation mechanism) factor which is used to find the top k autocorrelation delays.
It’s recommended in the paper to set it to a number between 1 and 5.
This is the configuration class to store the configuration of an AutoformerModel. It is used to instantiate an
Autoformer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Autoformer
huggingface/autoformer-tourism-monthly
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Copied
from transformers import AutoformerConfig, AutoformerModel
# Initializing a default Autoformer configuration
configuration = AutoformerConfig()
# Randomly initializing a model (with random weights) from the configuration
model = AutoformerModel(configuration)
# Accessing the model configuration
configuration = model.config
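Beyond the defaults, the Autoformer-specific knobs described above (label_length, moving_average, autocorrelation_factor) can be set directly on the configuration. A sketch with illustrative, not recommended, values:
from transformers import AutoformerConfig, AutoformerForPrediction
config = AutoformerConfig(
    prediction_length=24,        # forecast horizon
    context_length=48,           # encoder context window
    lags_sequence=[1, 2, 3, 7],  # lagged values used as extra covariates
    num_time_features=2,
    label_length=10,             # decoder start-token length for non-autoregressive decoding
    moving_average=25,           # kernel size of the decomposition layer's AvgPool1d
    autocorrelation_factor=3,    # top-k factor of the Auto-Correlation mechanism
)
model = AutoformerForPrediction(config)
print(sum(p.numel() for p in model.parameters()))  # number of parameters of the randomly initialized model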
AutoformerModel
class transformers.AutoformerModel
<
source
>
(
config: AutoformerConfig
)
Parameters
config (AutoformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Autoformer Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
<
source
>
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Past values of the time series, that serve as context in order to predict the future. These values may
contain lags, i.e. additional values from the past which are added in order to serve as “extra context”.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features).
The sequence length here is equal to context_length + max(config.lags_sequence).
Missing values need to be replaced with zeros.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features), optional) —
Optional time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires additional time features to be provided.
The Autoformer only learns additional embeddings for static_categorical_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length)) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs to learn to output, given the past_values.
See the demo notebook and code snippets for details.
Missing values need to be replaced with zeros.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features), optional) —
Optional time features, which the model internally will add to future_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires additional features to be provided.
The Autoformer only learns additional embeddings for static_categorical_features.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of last_hidden_state, hidden_states (optional) and attentions (optional).
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or tuple(torch.FloatTensor)
A transformers.models.autoformer.modeling_autoformer.AutoformerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AutoformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
trend (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Trend tensor for each time series.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
The AutoformerModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from huggingface_hub import hf_hub_download
import torch
from transformers import AutoformerModel
file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
batch = torch.load(file)
model = AutoformerModel.from_pretrained("huggingface/autoformer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
last_hidden_state = outputs.last_hidden_state
AutoformerForPrediction
class transformers.AutoformerForPrediction
<
source
>
(
config: AutoformerConfig
)
Parameters
config (AutoformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Autoformer Model with a distribution head on top for time-series forecasting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
future_observed_mask: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqTSPredictionOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Past values of the time series, that serve as context in order to predict the future. These values may
contain lags, i.e. additional values from the past which are added in order to serve as “extra context”.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features).
The sequence length here is equal to context_length + max(config.lags_sequence).
Missing values need to be replaced with zeros.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features), optional) —
Optional time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires these additional time features to be provided.
The Autoformer only learns additional embeddings for static_categorical_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length)) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs to learn to output, given the past_values.
See the demo notebook and code snippets for details.
Missing values need to be replaced with zeros.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features), optional) —
Optional time features, which the model internally will add to future_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires these additional features to be provided.
The Autoformer only learns additional embeddings for static_categorical_features.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of last_hidden_state, hidden_states (optional) and attentions (optional)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSPredictionOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSPredictionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AutoformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when future_values is provided) — Distributional loss.
params (torch.FloatTensor of shape (batch_size, num_samples, num_params)) — Parameters of the chosen distribution.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window, which are used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window, which are used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch which are copied to the covariates at inference time.
The AutoformerForPrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
import torch
from transformers import AutoformerForPrediction
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)
model = AutoformerForPrediction.from_pretrained("huggingface/autoformer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    static_real_features=batch["static_real_features"],
    future_values=batch["future_values"],
    future_time_features=batch["future_time_features"],
)
loss = outputs.loss
loss.backward()
# during inference, one only provides past values
# as well as possible additional features
# the model autoregressively generates future values
outputs = model.generate(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    static_real_features=batch["static_real_features"],
    future_time_features=batch["future_time_features"],
)
mean_prediction = outputs.sequences.mean(dim=1)
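# outputs.sequences stacks the drawn samples and is expected to have shape
# (batch_size, num_samples, prediction_length), so averaging over dim=1 yields
# one point forecast per time series in the batch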
ErnieM
Overview
The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning
Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun,
Hao Tian, Hua Wu, Haifeng Wang.
The abstract from the paper is the following:
Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for low-resource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.
Tips:
Ernie-M is a BERT-like model, so it is a stacked Transformer Encoder.
Instead of using MaskedLM for pretraining (like BERT), the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now, these two LMHead objectives are not implemented here.
It is a multilingual language model.
Next Sentence Prediction was not used in the pretraining process.
This model was contributed by Susnato Dhar. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Multiple choice task guide
ErnieMConfig
class transformers.ErnieMConfig
(
vocab_size: int = 250002
hidden_size: int = 768
num_hidden_layers: int = 12
num_attention_heads: int = 12
intermediate_size: int = 3072
hidden_act: str = 'gelu'
hidden_dropout_prob: float = 0.1
attention_probs_dropout_prob: float = 0.1
max_position_embeddings: int = 514
initializer_range: float = 0.02
pad_token_id: int = 1
layer_norm_eps: float = 1e-05
classifier_dropout = None
is_decoder = False
act_dropout = 0.0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250002) —
Vocabulary size of input_ids in ErnieMModel; this is also the vocabulary size of the token embedding matrix.
Defines the number of different tokens that can be represented by the input_ids passed when calling
ErnieMModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the embedding layer, encoder layers and pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to feed-forward layers are
firstly projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically
intermediate_size is larger than hidden_size.
hidden_act (str, optional, defaults to "gelu") —
The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other torch
supported activation functions are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets.
act_dropout (float, optional, defaults to 0.0) —
This dropout probability is used in ErnieMEncoderLayer after activation.
max_position_embeddings (int, optional, defaults to 514) —
The maximum value of the dimensionality of position encoding, which dictates the maximum supported length
of an input sequence.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the normal initializer for initializing all weight matrices.
pad_token_id (int, optional, defaults to 1) —
The index of the padding token in the token vocabulary.
This is the configuration class to store the configuration of an ErnieMModel. It is used to instantiate an
Ernie-M model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Ernie-M
susnato/ernie-m-base_pytorch architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
A normal_initializer initializes weight matrices as normal distributions. See
ErnieMPretrainedModel._init_weights() for how weights are initialized in ErnieMModel.
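Example (a minimal sketch of the usual configuration workflow):
from transformers import ErnieMConfig, ErnieMModel
# Initializing a default Ernie-M style configuration
configuration = ErnieMConfig()
# Initializing a model (with random weights) from that configuration
model = ErnieMModel(configuration)
# Accessing the model configuration
configuration = model.config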
ErnieMTokenizer
class transformers.ErnieMTokenizer
(
sentencepiece_model_ckpt
vocab_file = None
do_lower_case = False
encoding = 'utf8'
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
sentencepiece_model_ckpt (str) —
The file path of the sentencepiece model.
vocab_file (str, optional) —
The file path of the vocabulary.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
A special token representing the unknown (out-of-vocabulary) token. An out-of-vocabulary token is set to
unk_token in order to be converted to an ID.
sep_token (str, optional, defaults to "[SEP]") —
A special token separating two different sentences in the same input.
pad_token (str, optional, defaults to "[PAD]") —
A special token used to make arrays of tokens the same size for batching purposes.
cls_token (str, optional, defaults to "[CLS]") —
A special token used for sequence classification. It is the first token of the sequence when built with
special tokens.
mask_token (str, optional, defaults to "[MASK]") —
A special token representing a masked token. This is the token used in the masked language modeling task,
in which the model tries to predict the original unmasked token.
Constructs an Ernie-M tokenizer. It uses the sentencepiece tools to cut words into sub-words.
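Example (a short sketch of typical usage; it assumes the susnato/ernie-m-base_pytorch checkpoint used in the model examples below also ships the tokenizer files):
from transformers import ErnieMTokenizer
tokenizer = ErnieMTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
# Encode a sentence; the special tokens ([CLS]/[SEP]) are added automatically
encoding = tokenizer("Hello, my dog is cute", return_tensors="pt")
input_ids = encoding["input_ids"]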
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens (see the example below). An ErnieM sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] [SEP] B [SEP]
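Example (a small sketch reusing the tokenizer loaded above; ids_a and ids_b stand for plain token ID lists without special tokens):
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)  # [CLS] X [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] [SEP] B [SEP]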
get_special_tokens_mask
(
token_ids_0
token_ids_1 = None
already_has_special_tokens = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids of the first sequence.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer encode method.
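Example (continuing with ids_a from the sketch above; the mask marks where special tokens would sit once added):
mask = tokenizer.get_special_tokens_mask(ids_a)
# e.g. [1, 0, ..., 0, 1]: 1 for the added [CLS]/[SEP] positions, 0 for ordinary sequence tokens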
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
The first tokenized sequence.
token_ids_1 (List[int], optional) —
The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs? Should be overridden in a subclass if the model has a special way of
building those.
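Example (a sketch that only inspects the result, since the exact values are implementation specific):
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(len(token_type_ids), token_type_ids[:5])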
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
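Example (a hedged sketch; the target directory is assumed to exist, and save_pretrained is the more common entry point):
import os
os.makedirs("./ernie_m_tokenizer", exist_ok=True)
saved_files = tokenizer.save_vocabulary("./ernie_m_tokenizer")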
ErnieMModel
class transformers.ErnieMModel
(
config
add_pooling_layer = True
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ErnieM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used, only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The ErnieMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieMModel
import torch
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMModel.from_pretrained("susnato/ernie-m-base_pytorch")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ErnieMForSequenceClassification
class transformers.ErnieMForSequenceClassification
<
source
>
(
config
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ErnieM Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
use_cache: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = True
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieMForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, ErnieMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, ErnieMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ErnieMForSequenceClassification.from_pretrained(
    "susnato/ernie-m-base_pytorch", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
ErnieMForMultipleChoice
class transformers.ErnieMForMultipleChoice
(
config
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ErnieM Model with a multiple choice classification head on top (a linear layer on top of
the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = True
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieMForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieMForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForMultipleChoice.from_pretrained("susnato/ernie-m-base_pytorch")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
ErnieMForTokenClassification
class transformers.ErnieMForTokenClassification
(
config
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ErnieM Model with a token classification head on top (a linear layer on top of
the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = True
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieMForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieMForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForTokenClassification.from_pretrained("susnato/ernie-m-base_pytorch")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
ErnieMForQuestionAnswering
class transformers.ErnieMForQuestionAnswering
(
config
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ErnieM Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = True
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieMForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieMForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForQuestionAnswering.from_pretrained("susnato/ernie-m-base_pytorch")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
ErnieMForInformationExtraction
class transformers.ErnieMForInformationExtraction
<
source
>
(
config
)
Parameters
config (ErnieMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ErnieMForInformationExtraction is an Ernie-M model with two linear layers on top of the hidden-states output to
compute start_prob and end_prob, designed for Universal Information Extraction.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = True
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for position (index) for computing the start_positions loss. Positions outside of the sequence are
not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) for computing the end_positions loss. Positions outside of the sequence are not
taken into account for computing the loss.
The ErnieMForInformationExtraction forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
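Example (a minimal sketch: it reuses the checkpoint from the examples above, and since the exact output attributes are not documented here, the code only inspects the returned object rather than assuming attribute names):
from transformers import AutoTokenizer, ErnieMForInformationExtraction
import torch
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForInformationExtraction.from_pretrained("susnato/ernie-m-base_pytorch")
inputs = tokenizer("Jim Henson was a nice puppet", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# The two linear heads described above produce per-token start/end scores for the
# extracted span; print the output to see the exact field names of the returned object.
print(outputs)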
Whisper
Overview
The Whisper model was proposed in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
The abstract from the paper is the following:
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
Tips:
The model usually performs well without requiring any finetuning.
The model follows a classic encoder-decoder architecture, which means that it relies on the generate() function for inference.
Inference is currently only implemented for short-form audio, i.e. audio pre-segmented into segments of at most 30 seconds. Long-form inference (including timestamps) will be implemented in a future release.
One can use WhisperProcessor to prepare audio for the model and to decode the predicted IDs back into text (see the short sketch below).
This model was contributed by Arthur Zucker. The TensorFlow version of this model was contributed by amyeroberts.
The original code can be found here.
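As a quick illustration of the tips above, here is a minimal short-form transcription sketch (it reuses the checkpoint and dataset from the examples further down this page):
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Load a short (<=30s) audio sample and turn it into log-mel input features.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
# Whisper is an encoder-decoder model, so inference goes through generate().
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])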
WhisperConfig
class transformers.WhisperConfig
<
source
>
(
vocab_size = 51865
num_mel_bins = 80
encoder_layers = 6
encoder_attention_heads = 4
decoder_layers = 6
decoder_attention_heads = 4
decoder_ffn_dim = 1536
encoder_ffn_dim = 1536
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
decoder_start_token_id = 50257
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 256
dropout = 0.0
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
scale_embedding = False
max_source_positions = 1500
max_target_positions = 448
pad_token_id = 50256
bos_token_id = 50256
eos_token_id = 50256
suppress_tokens = None
begin_suppress_tokens = [220, 50256]
use_weighted_layer_sum = False
classifier_proj_size = 256
apply_spec_augment = False
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
median_filter_width = 7
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 51865) —
Vocabulary size of the Whisper model. Defines the number of different tokens that can be represented by the
decoder_input_ids passed when calling WhisperModel
num_mel_bins (int, optional, defaults to 80) —
Number of mel features used per input features. Should correspond to the value used in the
WhisperProcessor class.
encoder_layers (int, optional, defaults to 6) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (int, optional, defaults to 1536) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
decoder_ffn_dim (int, optional, defaults to 1536) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_start_token_id (int, optional, defaults to 50257) —
Corresponds to the "<|startoftranscript|>" token, which is automatically used when no decoder_input_ids
are provided to the generate function. It is used to guide the model's generation process depending on
the task.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
is_encoder_decoder (bool, optional, defaults to True) —
Whether the model is used as an encoder/decoder or not.
activation_function (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
d_model (int, optional, defaults to 256) —
Dimensionality of the layers.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
max_source_positions (int, optional, defaults to 1500) —
The maximum sequence length of log-mel filter-bank features that this model might ever be used with.
max_target_positions (int, optional, defaults to 448) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
pad_token_id (int, optional, defaults to 50256) —
Padding token id.
bos_token_id (int, optional, defaults to 50256) —
Begin of stream token id.
eos_token_id (int, optional, defaults to 50256) —
End of stream token id.
suppress_tokens (List[int], optional) —
A list containing the non-speech tokens that will be used by the logits processor in the generate
function. NON_SPEECH_TOKENS and NON_SPEECH_TOKENS_MULTI correspond to the English-only and the
multilingual models, respectively.
begin_suppress_tokens (List[int], optional, defaults to [220,50256]) —
A list containing tokens that will be suppressed at the beginning of the sampling process. Initialized as
the token for " " (blank_token_id) and the eos_token_id.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of WhisperForAudioClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification. Only relevant when using an
instance of WhisperForAudioClassification.
apply_spec_augment (bool, optional, defaults to False) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector being chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment == True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector being chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
median_filter_width (int, optional, defaults to 7) —
Width of the median filter used to smooth the cross-attention outputs when computing token timestamps.
Should be an odd number.
This is the configuration class to store the configuration of a WhisperModel. It is used to instantiate a
Whisper model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Whisper
openai/whisper-tiny architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import WhisperConfig, WhisperModel
# Initializing a Whisper tiny style configuration
configuration = WhisperConfig()
# Initializing a model (with random weights) from the tiny style configuration
model = WhisperModel(configuration)
# Accessing the model configuration
configuration = model.config
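If you plan to fine-tune with SpecAugment, the masking parameters documented above can be set directly on the configuration. A small sketch (the particular values are illustrative only):
from transformers import WhisperConfig, WhisperForConditionalGeneration
# Enable SpecAugment-style masking of the encoder input features during training.
config = WhisperConfig(
    apply_spec_augment=True,
    mask_time_prob=0.05,     # fraction of time steps chosen as mask starts
    mask_time_length=10,     # length of each mask along the time axis
    mask_feature_prob=0.05,  # fraction of mel bins chosen as mask starts
    mask_feature_length=10,  # length of each mask along the feature axis
)
# Model with random weights and otherwise tiny-style defaults.
model = WhisperForConditionalGeneration(config)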
WhisperTokenizer
class transformers.WhisperTokenizer
<
source
>
(
vocab_file
merges_file
normalizer_file = None
errors = 'replace'
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
pad_token = None
add_prefix_space = False
language = None
task = None
predict_timestamps = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
normalizer_file (str, optional, defaults to None) —
Path to the normalizer file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to "<|endoftext|>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "<|endoftext|>") —
The beginning of sequence token. The decoder_start_token_id is used to set the first token as
"<|startoftranscript|>" when generating.
eos_token (str, optional, defaults to "<|endoftext|>") —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like
any other word.
language (str, optional) —
The language of the transcription text. The corresponding language id token is appended to the start of the
sequence for multilingual speech recognition and speech translation tasks, e.g. for Spanish the token
"<|es|>" is appended to the start of sequence. This should be used for multilingual fine-tuning only.
task (str, optional) —
Task identifier to append at the start of sequence (if any). This should be used for multilingual
fine-tuning, with "transcribe" for speech recognition and "translate" for speech translation.
predict_timestamps (bool, optional, defaults to False) —
Whether to omit the <|notimestamps|> token at the start of the sequence.
Construct a Whisper tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
set_prefix_tokens
<
source
>
(
language: str = None
task: str = None
predict_timestamps: bool = None
)
Parameters
language (str, optional, defaults to None) —
The language of the transcription text.
task (str, optional, defaults to None) —
Task identifier to append at the start of sequence (if any).
predict_timestamps (bool, optional, defaults to None) —
Whether to omit the <|notimestamps|> token at the start of the sequence.
Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to
update the prefix tokens as required when fine-tuning. Example:
# instantiate the tokenizer and set the prefix token to Spanish
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="spanish")
# now switch the prefix token from Spanish to French
tokenizer.set_prefix_tokens(language="french")
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
Build model inputs from a sequence by appending eos_token_id.
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
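A short sketch of how the returned mask lines up with an encoded sequence (the exact pattern depends on the checkpoint's prefix and end-of-sequence tokens):
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
ids = tokenizer("hello world").input_ids
# 1 marks special tokens (e.g. the prefix and eos tokens), 0 marks regular tokens.
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(list(zip(tokenizer.convert_ids_to_tokens(ids), mask)))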
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
WhisperTokenizerFast
class transformers.WhisperTokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
normalizer_file = None
tokenizer_file = None
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
add_prefix_space = False
language = None
task = None
predict_timestamps = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
normalizer_file (str, optional, defaults to None) —
Path to the normalizer file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to "<|endoftext|>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "<|endoftext|>") —
The beginning of sequence token. The decoder_start_token_id is used to set the first token as
"<|startoftranscript|>" when generating.
eos_token (str, optional, defaults to "<|endoftext|>") —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like
any other word. (The Whisper tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
language (str, optional) —
The language of the transcription text. The corresponding language id token is appended to the start of the
sequence for multilingual speech recognition and speech translation tasks, e.g. for Spanish the token
"<|es|>" is appended to the start of sequence. This should be used for multilingual fine-tuning only.
task (str, optional) —
Task identifier to append at the start of sequence (if any). This should be used for multilingual
fine-tuning, with "transcribe" for speech recognition and "translate" for speech translation.
predict_timestamps (bool, optional, defaults to False) —
Whether to omit the <|notimestamps|> token at the start of the sequence.
Construct a “fast” Whisper tokenizer (backed by HuggingFace’s tokenizers library).
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
set_prefix_tokens
<
source
>
(
language: str = None
task: str = None
predict_timestamps: bool = None
)
Parameters
language (str, optional, defaults to None) —
The language of the transcription text.
task (str, optional, defaults to None) —
Task identifier to append at the start of sequence (if any).
predict_timestamps (bool, optional, defaults to None) —
Whether to omit the <|notimestamps|> token at the start of the sequence.
Override the prefix tokens appended to the start of the label sequence. This method can be used standalone to
update the prefix tokens as required when fine-tuning. Example:
# instantiate the tokenizer and set the prefix token to Spanish
tokenizer = WhisperTokenizerFast.from_pretrained("openai/whisper-tiny", language="spanish")
# now switch the prefix token from Spanish to French
tokenizer.set_prefix_tokens(language="french")
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
Build model inputs from a sequence by appending eos_token_id.
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
WhisperFeatureExtractor
class transformers.WhisperFeatureExtractor
<
source
>
(
feature_size = 80
sampling_rate = 16000
hop_length = 160
chunk_length = 30
n_fft = 400
padding_value = 0.0
return_attention_mask = False
**kwargs
)
Parameters
feature_size (int, defaults to 80) —
The feature dimension of the extracted features.
sampling_rate (int, defaults to 16000) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
hop_length (int, defaults to 160) —
Length of the overlapping windows for the STFT used to obtain the Mel Frequency coefficients.
chunk_length (int, defaults to 30) —
The maximum number of chunks of sampling_rate samples used to trim and pad longer or shorter audio
sequences.
n_fft (int, defaults to 400) —
Size of the Fourier transform.
padding_value (float, optional, defaults to 0.0) —
Padding value used to pad the audio. Should correspond to silences.
Constructs a Whisper feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the Short Time Fourier Transform, which should match PyTorch's torch.stft.
__call__
<
source
>
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
truncation: bool = True
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_attention_mask: typing.Optional[bool] = None
padding: typing.Optional[str] = 'max_length'
max_length: typing.Optional[int] = None
sampling_rate: typing.Optional[int] = None
do_normalize: typing.Optional[bool] = None
**kwargs
)
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
truncation (bool, optional, defaults to True) —
Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional, defaults to None) —
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor’s default.
What are attention masks?
For Whisper models, attention_mask should always be passed for batched inference, to avoid subtle
bugs.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors and to allow the automatic speech recognition
pipeline to work correctly.
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values / vectors.
do_normalize (bool, optional, defaults to False) —
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
improve the performance of the model.
Main method to featurize and prepare for the model one or several sequence(s).
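A minimal sketch of featurizing a raw waveform (a random array stands in for real audio):
import numpy as np
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
# One second of fake mono audio at 16 kHz; pass sampling_rate to avoid silent errors.
waveform = np.random.randn(16000).astype(np.float32)
features = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
# Inputs are padded/trimmed to 30 s: 80 mel bins x 3000 frames per example.
print(features.input_features.shape)  # torch.Size([1, 80, 3000])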
WhisperProcessor
class transformers.WhisperProcessor
<
source
>
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (WhisperFeatureExtractor) —
An instance of WhisperFeatureExtractor. The feature extractor is a required input.
tokenizer (WhisperTokenizer) —
An instance of WhisperTokenizer. The tokenizer is a required input.
Constructs a Whisper processor which wraps a Whisper feature extractor and a Whisper tokenizer into a single
processor.
WhisperProcessor offers all the functionalities of WhisperFeatureExtractor and WhisperTokenizer. See
the call() and decode() for more information.
__call__
<
source
>
(
*args
**kwargs
)
Forwards the audio argument to WhisperFeatureExtractor's call() and the text
argument to WhisperTokenizer's call(). Please refer to the docstring of the above two methods for more
information.
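A short sketch of the two forwarding paths, audio through the feature extractor and text through the tokenizer (the text path is typically used to build labels when fine-tuning):
import numpy as np
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
# Audio is forwarded to WhisperFeatureExtractor ...
audio_inputs = processor(audio=np.zeros(16000, dtype=np.float32), sampling_rate=16000, return_tensors="pt")
# ... and text is forwarded to WhisperTokenizer.
label_ids = processor(text="hello world", return_tensors="pt").input_ids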
from_pretrained
<
source
>
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor
from_pretrained(), image processor
ImageProcessingMixin and the tokenizer
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
<
source
>
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method is simply calling save_pretrained() and
save_pretrained(). Please refer to the docstrings of the
methods above for more information.
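A save-and-reload round trip using the two methods above (the local directory name is arbitrary):
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
# Writes the feature extractor JSON and the tokenizer files into the directory.
processor.save_pretrained("./whisper-tiny-processor")
# Reload from the local directory instead of the Hub.
reloaded_processor = WhisperProcessor.from_pretrained("./whisper-tiny-processor")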
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to WhisperTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to WhisperTokenizer’s decode(). Please refer to
the docstring of this method for more information.
WhisperModel
class transformers.WhisperModel
<
source
>
(
config: WhisperConfig
)
Parameters
config (WhisperConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Whisper Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_features: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_inputs_embeds: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a
tensor of type torch.FloatTensor. See call()
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing SpecAugment data augmentation on padding token indices. Mask values selected in
[0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using WhisperTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Whisper uses the decoder_start_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_whisper._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the BART
paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The WhisperModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, WhisperModel
from datasets import load_dataset
model = WhisperModel.from_pretrained("openai/whisper-base")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")
input_features = inputs.input_features
decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id
last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
list(last_hidden_state.shape)
[1, 2, 512]
_mask_input_features
<
source
>
(
input_features: FloatTensor
attention_mask: typing.Optional[torch.LongTensor] = None
)
Masks extracted features along time axis and/or along feature axis according to
SpecAugment.
WhisperForConditionalGeneration
class transformers.WhisperForConditionalGeneration
<
source
>
(
config: WhisperConfig
)
Parameters
config (WhisperConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Whisper Model with a language modeling head. Can be used for automatic speech recognition.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_features: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_inputs_embeds: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a
tensor of type torch.FloatTensor. See call()
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing SpecAugment data augmentation on padding token indices. Mask values selected in
[0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using WhisperTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Whisper uses the decoder_start_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_whisper._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the BART
paper for more information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is
only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The WhisperForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="pt")
input_features = inputs.input_features
generated_ids = model.generate(inputs=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
transcription
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
WhisperForAudioClassification
class transformers.WhisperForAudioClassification
<
source
>
(
config
)
Parameters
config (WhisperConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Whisper Encoder Model with a sequence classification head on top (a linear layer over the pooled output) for tasks
like SUPERB Keyword Spotting.
forward
<
source
>
(
input_features: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.FloatTensor of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a
tensor of type torch.FloatTensor. See call()
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The WhisperForAudioClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, WhisperForAudioClassification
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id")
model = WhisperForAudioClassification.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id")
ds = load_dataset("google/fleurs", "all", split="validation", streaming=True)
sample = next(iter(ds))
inputs = feature_extractor(
... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt"
... )
input_features = inputs.input_features
with torch.no_grad():
... logits = model(input_features).logits
predicted_class_ids = torch.argmax(logits).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'Afrikaans'
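If labels are passed, the same forward call also returns a classification loss (cross-entropy when config.num_labels > 1, mean-squared error when config.num_labels == 1). The lines below are a minimal sketch of that usage, reusing model and input_features from the example above; the label value is purely illustrative and not taken from real training data.
labels = torch.tensor([predicted_class_ids])  # illustrative target label
loss = model(input_features, labels=labels).loss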
TFWhisperModel
class transformers.TFWhisperModel
( *args, **kwargs )
Parameters
config (WhisperConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Whisper Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
(
input_features: TFModelInputType | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
decoder_inputs_embeds: Optional[Tuple[Union[np.ndarray, tf.Tensor]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_features (tf.Tensor of shape (batch_size, feature_size, sequence_length)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of type tf.Tensor. See call()
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using WhisperTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Whisper uses the decoder_start_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_whisper._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(tf.Tensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(tf.Tensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (WhisperConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFWhisperModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import TFWhisperModel, AutoFeatureExtractor
from datasets import load_dataset
model = TFWhisperModel.from_pretrained("openai/whisper-base")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="tf")
input_features = inputs.input_features
decoder_input_ids = tf.convert_to_tensor([[1, 1]]) * model.config.decoder_start_token_id
last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
list(last_hidden_state.shape)
[1, 2, 512]
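As described for the encoder_outputs argument, precomputed encoder activations can be passed back in so the audio is not re-encoded on every call. The lines below are a sketch of that pattern (not part of the original example), reusing model, input_features and decoder_input_ids from above.
first = model(input_features, decoder_input_ids=decoder_input_ids)
outputs = model(
...     encoder_outputs=(first.encoder_last_hidden_state,),
...     decoder_input_ids=decoder_input_ids,
... )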
TFWhisperForConditionalGeneration
class transformers.TFWhisperForConditionalGeneration
( *args, **kwargs )
Parameters
config (WhisperConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Whisper Model with a language modeling head. Can be used for automatic speech recognition.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
(
input_features: TFModelInputType | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
decoder_inputs_embeds: Optional[Tuple[Union[np.ndarray, tf.Tensor]]] = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_features (tf.Tensor of shape (batch_size, feature_size, sequence_length)) —
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained
by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g.
via the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the fbank features, padding and conversion into a
tensor of type tf.Tensor. See call()
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using WhisperTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Whisper uses the decoder_start_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read
modeling_whisper._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(tf.Tensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(tf.Tensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is
only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (WhisperConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFWhisperForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoProcessor, TFWhisperForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="tf")
input_features = inputs.input_features
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
transcription
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
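For fine-tuning, the labels argument documented above makes the same call return the language-modeling loss. The lines below are a minimal sketch under the assumption that the reference transcript in the dataset's text column is a suitable target; in practice the labels come from your own training data.
labels = processor.tokenizer(ds[0]["text"], return_tensors="tf").input_ids
outputs = model(input_features=input_features, labels=labels)
loss = outputs.loss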
FlaxWhisperModel
class transformers.FlaxWhisperModel
(
config: WhisperConfig
input_shape: typing.Tuple[int] = (1, 80, 3000)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (WhisperConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision
inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters. If you wish to change the dtype of the model parameters, see to_fp16()
and to_bf16().
The bare Whisper Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_features: Array
decoder_input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
Parameters
input_features (numpy.ndarray of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
WhisperFeatureExtractor should be used for extracting the features, padding and conversion into a
tensor of type numpy.ndarray. See call()
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not support masking of the input_features; this argument is preserved for compatibility but
is not used. By default, silence in the input log-mel spectrogram is ignored.
decoder_input_ids (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using
WhisperTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs? Whisper uses the decoder_start_token_id as
the starting token for decoder_input_ids generation.
decoder_attention_mask (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default. If you want to change padding behavior, you should modify it to your needs. See diagram 1
in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not use position_ids in the encoder, as input_features is always the same size and doesn’t
use masking, but this argument is preserved for compatibility. By default, silence in the input log-mel
spectrogram is ignored.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxWhisperPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
import jax.numpy as jnp
from transformers import AutoFeatureExtractor, FlaxWhisperModel
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-tiny")
model = FlaxWhisperModel.from_pretrained("openai/whisper-tiny", from_pt=True)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="np")
input_features = inputs.input_features
decoder_input_ids = jnp.array([[1, 1]]) * model.config.decoder_start_token_id
last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state
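Because the Flax module is stateless, the forward pass can be wrapped in jax.jit to take advantage of the JIT compilation listed above. The helper below is a sketch (not part of the original example) that reuses model, input_features and decoder_input_ids from the example.
import jax
jit_forward = jax.jit(
...     lambda features, decoder_ids: model(features, decoder_input_ids=decoder_ids).last_hidden_state
... )
last_hidden_state = jit_forward(input_features, decoder_input_ids)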
FlaxWhisperForConditionalGeneration
class transformers.FlaxWhisperForConditionalGeneration
(
config: WhisperConfig
input_shape: typing.Tuple[int] = (1, 80, 3000)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (WhisperConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision
inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters. If you wish to change the dtype of the model parameters, see to_fp16()
and to_bf16().
The Whisper Model with a language modeling head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_features: Array
decoder_input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_features (numpy.ndarray of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
WhisperFeatureExtractor should be used for extracting the features, padding and conversion into a
tensor of type numpy.ndarray. See call()
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not support masking of the input_features; this argument is preserved for compatibility but
is not used. By default, silence in the input log-mel spectrogram is ignored.
decoder_input_ids (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using
WhisperTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs? Whisper uses the decoder_start_token_id as
the starting token for decoder_input_ids generation.
decoder_attention_mask (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default. If you want to change padding behavior, you should modify it to your needs. See diagram 1
in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not use position_ids in the encoder, as input_features is always the same size and doesn’t
use masking, but this argument is preserved for compatibility. By default, silence in the input log-mel
spectrogram is ignored.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxWhisperPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Transcription example:
from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration
from datasets import load_dataset
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = FlaxWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en", from_pt=True)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], return_tensors="np")
input_features = inputs.input_features
generated_ids = model.generate(input_features=input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
transcription
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
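As noted for the dtype parameter, the computation can be run in half precision, and to_fp16() additionally casts the parameters themselves. The lines below are a sketch of that setup on a GPU; the model_fp16 name is illustrative, and whether half precision is appropriate depends on your hardware.
import jax.numpy as jnp
model_fp16 = FlaxWhisperForConditionalGeneration.from_pretrained(
...     "openai/whisper-tiny.en", from_pt=True, dtype=jnp.float16
... )
model_fp16.params = model_fp16.to_fp16(model_fp16.params)  # also cast the weights to float16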
FlaxWhisperForAudioClassification
class transformers.FlaxWhisperForAudioClassification
(
config: WhisperConfig
input_shape: typing.Tuple[int] = (1, 80, 3000)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (WhisperConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision
inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters. If you wish to change the dtype of the model parameters, see to_fp16()
and to_bf16().
The Whisper Model with an audio classification head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.). This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_features: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
**kwargs
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_features (numpy.ndarray of shape (batch_size, feature_size, sequence_length)) —
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
WhisperFeatureExtractor should be used for extracting the features, padding and conversion into a
tensor of type numpy.ndarray. See call()
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not support masking of the input_features; this argument is preserved for compatibility but
is not used. By default, silence in the input log-mel spectrogram is ignored.
decoder_input_ids (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using
WhisperTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are decoder input IDs? Whisper uses the decoder_start_token_id as
the starting token for decoder_input_ids generation.
decoder_attention_mask (numpy.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default. If you want to change padding behavior, you should modify it to your needs. See diagram 1
in the paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Whisper does not use position_ids in the encoder, as input_features is always the same size and doesn’t
use masking, but this argument is preserved for compatibility. By default, silence in the input log-mel
spectrogram is ignored.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (WhisperConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxWhisperForAudioClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Audio classification example:
import jax.numpy as jnp
from transformers import AutoFeatureExtractor, FlaxWhisperForAudioClassification
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("sanchit-gandhi/whisper-medium-fleurs-lang-id")
model = FlaxWhisperForAudioClassification.from_pretrained(
... "sanchit-gandhi/whisper-medium-fleurs-lang-id", from_pt=True
... )
ds = load_dataset("google/fleurs", "all", split="validation", streaming=True)
sample = next(iter(ds))
inputs = feature_extractor(
... sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="np"
... )
input_features = inputs.input_features
logits = model(input_features).logits
predicted_class_ids = jnp.argmax(logits).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'af_za'
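To inspect more than the top prediction from the example above, the logits can be converted into per-language probabilities with a softmax; the lines below are a small sketch reusing logits and model from the example.
import jax
probs = jax.nn.softmax(logits, axis=-1)[0]
top5 = jnp.argsort(probs)[::-1][:5]
[(model.config.id2label[int(i)], float(probs[i])) for i in top5]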
Hybrid Vision Transformer (ViT Hybrid)
Overview
The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It’s the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer
that leverages a convolutional backbone (specifically, BiT) whose features are used as the initial “tokens” for the Transformer.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid.
Image Classification
ViTHybridForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTHybridConfig
class transformers.ViTHybridConfig
(
backbone_config = None
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 1
num_channels = 3
backbone_featmap_shape = [1, 1024, 24, 24]
qkv_bias = True
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 1) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
backbone_config (Union[Dict[str, Any], PretrainedConfig], optional, defaults to None) —
The configuration of the backbone in a dictionary or the config object of the backbone.
backbone_featmap_shape (List[int], optional, defaults to [1, 1024, 24, 24]) —
Used only for the hybrid embedding type. The shape of the feature maps of the backbone.
This is the configuration class to store the configuration of a ViTHybridModel. It is used to instantiate a ViT
Hybrid model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ViT Hybrid
google/vit-hybrid-base-bit-384 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ViTHybridConfig, ViTHybridModel
# Initializing a ViT Hybrid vit-hybrid-base-bit-384 style configuration
configuration = ViTHybridConfig()
# Initializing a model (with random weights) from the vit-hybrid-base-bit-384 style configuration
model = ViTHybridModel(configuration)
# Accessing the model configuration
configuration = model.config
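Because backbone_config accepts either a dictionary or a backbone configuration object, the convolutional backbone can also be customised. The snippet below is a sketch that passes a BiT-style dictionary mirroring the default backbone; the values shown are assumptions for illustration rather than a recommendation.
from transformers import ViTHybridConfig, ViTHybridModel
# BiT backbone settings passed as a plain dictionary (illustrative values)
backbone_config = {
...     "global_padding": "same",
...     "layer_type": "bottleneck",
...     "depths": [3, 4, 9],
...     "out_features": ["stage3"],
...     "embedding_dynamic_padding": True,
... }
configuration = ViTHybridConfig(backbone_config=backbone_config)
model = ViTHybridModel(configuration)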
to_dict
( )
Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns:
Dict[str, Any]: Dictionary of all the attributes that make up this configuration instance.
ViTHybridImageProcessor
class transformers.ViTHybridImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. The shortest edge of the image is resized to size[“shortest_edge”], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the
preprocess method.
crop_size (Dict[str, int] optional, defaults to 224) —
Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a ViT Hybrid image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: int = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with
the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use for normalization. Only has an effect if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use for normalization. Only has an effect if do_normalize is set to
True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Preprocess an image or batch of images.
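A minimal usage sketch of the image processor; the COCO image URL below is only an illustrative input, and the printed shape assumes the 384-pixel crop size of the google/vit-hybrid-base-bit-384 checkpoint:

from PIL import Image
import requests
from transformers import ViTHybridImageProcessor

image_processor = ViTHybridImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# resize, center crop, rescale and normalize the image, returning PyTorch tensors
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 384, 384]) for this checkpoint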
ViTHybridModel
class transformers.ViTHybridModel
(
config: ViTHybridConfig
add_pooling_layer: bool = True
use_mask_token: bool = False
)
Parameters
config (ViTHybridConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViT Hybrid Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViTHybridImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTHybridConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTHybridModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ViTHybridModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridModel.from_pretrained("google/vit-hybrid-base-bit-384")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
ViTHybridForImageClassification
class transformers.ViTHybridForImageClassification
(
config: ViTHybridConfig
)
Parameters
config (ViTHybridConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViT Hybrid Model transformer with an image classification head on top (a linear layer on top of the final hidden
state of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViTHybridImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTHybridConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTHybridForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ViTHybridForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
BARTpho
Overview
The BARTpho model was proposed in BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
The abstract from the paper is the following:
We present BARTpho with two versions — BARTpho_word and BARTpho_syllable — the first public large-scale monolingual
sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the “large” architecture and pre-training
scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments
on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho
outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future
research and applications of generative Vietnamese NLP tasks.
Example of use:
import torch
from transformers import AutoModel, AutoTokenizer
bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
line = "Chúng tôi là những nghiên cứu viên."
input_ids = tokenizer(line, return_tensors="pt")
with torch.no_grad():
... features = bartpho(**input_ids)  # Model outputs are now tuples
# With TensorFlow 2.0+:
from transformers import TFAutoModel
bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
input_ids = tokenizer(line, return_tensors="tf")
features = bartpho(**input_ids)
Tips:
Following mBART, BARTpho uses the “large” architecture of BART with an additional layer-normalization layer on top of
both the encoder and decoder. Thus, when adapting usage examples from the BART documentation to
BARTpho, replace the BART-specialized classes with their mBART-specialized counterparts.
For example:
from transformers import MBartForConditionalGeneration, AutoTokenizer
bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")  # the tokenizer was missing from the original snippet
TXT = "Chúng tôi là <mask> nghiên cứu viên."
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
logits = bartpho(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())
This implementation is only for tokenization: “monolingual_vocab_file” consists of Vietnamese-specialized types
extracted from the pre-trained SentencePiece model “vocab_file” that is available from the multilingual XLM-RoBERTa.
Other languages, if employing this pre-trained multilingual SentencePiece model “vocab_file” for subword
segmentation, can reuse BartphoTokenizer with their own language-specialized “monolingual_vocab_file”.
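As a hedged sketch of that reuse; the two file paths below are hypothetical placeholders for the shared multilingual SentencePiece model and a language-specific monolingual vocabulary:

from transformers import BartphoTokenizer

# hypothetical local files prepared beforehand
tokenizer = BartphoTokenizer(
    vocab_file="sentencepiece.bpe.model",            # the multilingual SentencePiece model
    monolingual_vocab_file="my_language_vocab.txt",  # language-specialized vocabulary
)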
This model was contributed by dqnguyen. The original code can be found here.
BartphoTokenizer
class transformers.BartphoTokenizer
(
vocab_file
monolingual_vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file. This vocabulary is the pre-trained SentencePiece model available from the
multilingual XLM-RoBERTa, also used in mBART, consisting of 250K types.
monolingual_vocab_file (str) —
Path to the monolingual vocabulary file. This monolingual vocabulary consists of Vietnamese-specialized
types extracted from the multilingual vocabulary vocab_file of 250K types.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from XLMRobertaTokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
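For instance, a hedged sketch of enabling SentencePiece subword regularization through the sp_model_kwargs argument described above (the sampling values are illustrative only):

from transformers import BartphoTokenizer

tokenizer = BartphoTokenizer.from_pretrained(
    "vinai/bartpho-syllable",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)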
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BARTpho sequence has the following format (see the sketch after the list):
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
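A minimal sketch of how the single-sequence format shows up when encoding with special tokens added, assuming the vinai/bartpho-syllable checkpoint:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
ids = tokenizer("Chúng tôi là những nghiên cứu viên.")["input_ids"]
tokens = tokenizer.convert_ids_to_tokens(ids)
print(tokens[0], tokens[-1])  # '<s>' '</s>'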
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings for sub-words) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BARTPho does not
make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
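A short hedged sketch; the exact mask length depends on how the text is segmented:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")
ids = tokenizer("Chúng tôi là những nghiên cứu viên.")["input_ids"]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # 1 marks <s>/</s>, 0 marks ordinary tokens, e.g. [1, 0, ..., 0, 1]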
UMT5
Overview
The UMT5 model was proposed in UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
The abstract from the paper is the following:
Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language’s corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.
Tips:
UMT5 was only pre-trained on mC4 excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since umT5 was pre-trained in an unsupervised manner, there’s no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants:
google/umt5-small
google/umt5-base
google/umt5-xl
google/umt5-xxl.
This model was contributed by agemagician and stefan-it. The original code can be
found here.
One can refer to T5’s documentation page for more tips, code examples and notebooks.
Differences with mT5?
`UMT5` is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means that the model sets `has_relative_bias` for each layer.
The conversion script is also different because the model was saved in t5x's latest checkpointing format.
Sample usage
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
inputs = tokenizer(
... "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>.",
... return_tensors="pt",
... )
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs))
['<pad><extra_id_0>nyone who<extra_id_1> drink<extra_id_2> a<extra_id_3> alcohol<extra_id_4> A<extra_id_5> A. This<extra_id_6> I<extra_id_7><extra_id_52><extra_id_53></s>']
UMT5Config
class transformers.UMT5Config
(
vocab_size = 250112
d_model = 512
d_kv = 64
d_ff = 1024
num_layers = 8
num_decoder_layers = None
num_heads = 6
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
initializer_factor = 1.0
feed_forward_proj = 'gated-gelu'
is_encoder_decoder = True
use_cache = True
tokenizer_class = 'T5Tokenizer'
tie_word_embeddings = True
pad_token_id = 0
eos_token_id = 1
decoder_start_token_id = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250112) —
Vocabulary size of the UMT5 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling UMT5Model or TFUMT5Model.
d_model (int, optional, defaults to 512) —
Size of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Size of the key, query and value projections per attention head. In the conventional Transformer setup d_kv equals d_model // num_heads, but the defaults above (d_model = 512, num_heads = 6, d_kv = 64) show that this equality is not enforced here.
d_ff (int, optional, defaults to 1024) —
Size of the intermediate feed forward layer in each UMT5Block.
num_layers (int, optional, defaults to 8) —
Number of hidden layers in the Transformer encoder.
num_decoder_layers (int, optional) —
Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
num_heads (int, optional, defaults to 6) —
Number of attention heads for each attention layer in the Transformer encoder.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (string, optional, defaults to "gated-gelu") —
Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu".
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a UMT5Model. It is used to instantiate a UMT5
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the UMT5
google/umt5-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
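A minimal sketch, mirroring the configuration examples elsewhere in these docs, of building a randomly initialized model from a default UMT5Config:

from transformers import UMT5Config, UMT5Model

# Initializing a google/umt5-small style configuration with the defaults listed above
configuration = UMT5Config()
# Initializing a model (with random weights) from that configuration
model = UMT5Model(configuration)
# Accessing the model configuration
configuration = model.config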
UMT5Model
class transformers.UMT5Model
(
config
)
Parameters
config (UMT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare UMT5 Model transformer outputting raw hidden-states without any specific head on top.
The UMT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import UMT5Model, AutoTokenizer
model = UMT5Model.from_pretrained("google/umt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
noisy_text = "UN Offizier sagt, dass weiter <extra_id_0> werden muss in Syrien."
label = "<extra_id_0> verhandelt"
inputs = tokenizer(noisy_text, return_tensors="pt")
labels = tokenizer(text_target=label, return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
hidden_states = outputs.last_hidden_state
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining take a look at UMT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
UMT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at UMT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UMT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The UMT5Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, UMT5Model
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5Model.from_pretrained("google/umt5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for UMT5Model.
# This is not needed for torch's UMT5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
UMT5ForConditionalGeneration
class transformers.UMT5ForConditionalGeneration
(
config
)
Parameters
config (UMT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UMT5 Model with a language modeling head on top.
The UMT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import UMT5ForConditionalGeneration, AutoTokenizer
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, text_target=summary, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining take a look at UMT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
UMT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at UMT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UMT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The UMT5ForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, UMT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")
# training
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
# inference
input_ids = tokenizer("Studies have shown that <extra_id_0> good for you", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
tokenizer.decode(outputs[0], skip_special_tokens=True)
UMT5EncoderModel
class transformers.UMT5EncoderModel
(
config
)
Parameters
config (UMT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare UMT5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.
The UMT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import UMT5EncoderModel, AutoTokenizer
model = UMT5EncoderModel.from_pretrained("google/umt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
input_ids = tokenizer(article, return_tensors="pt").input_ids
outputs = model(input_ids)
hidden_state = outputs.last_hidden_state
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
To know more on how to prepare input_ids for pretraining take a look at UMT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UMT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UMT5EncoderModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, UMT5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5EncoderModel.from_pretrained("google/umt5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state
UMT5ForQuestionAnswering
class transformers.UMT5ForQuestionAnswering
<
source
>
(
config
)
Parameters
config (UMT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UMT5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers
on top of the hidden-states output to compute span start logits and span end logits).
The UMT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more about how to prepare input_ids for pretraining, take a look at UMT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
UMT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at UMT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UMT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The UMT5ForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
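Example (a minimal sketch, not taken from the original documentation; google/umt5-small ships no trained question-answering head, so the randomly initialized head below only illustrates the API):
from transformers import AutoTokenizer, UMT5ForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5ForQuestionAnswering.from_pretrained("google/umt5-small")

question, context = "Who wrote the book?", "The book was written by Jane."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end positions of the answer span
start_index = outputs.start_logits.argmax(dim=-1).item()
end_index = outputs.end_logits.argmax(dim=-1).item()
answer_tokens = inputs.input_ids[0, start_index : end_index + 1]
print(tokenizer.decode(answer_tokens, skip_special_tokens=True))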
Speech2Text2
Overview
The Speech2Text2 model is used together with Wav2Vec2 for Speech Translation models proposed in
Large-Scale Self- and Semi-Supervised Learning for Speech Translation by
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Speech2Text2 is a decoder-only transformer model that can be used with any speech encoder-only model, such as
Wav2Vec2 or HuBERT, for Speech-to-Text tasks. Please refer to the
SpeechEncoderDecoder class on how to combine Speech2Text2 with any speech encoder-only
model.
This model was contributed by Patrick von Platen.
The original code can be found here.
Tips:
Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
the official models.
Speech2Text2 is always used within the SpeechEncoderDecoder framework.
Speech2Text2’s tokenizer is based on fastBPE.
Inference
Speech2Text2’s SpeechEncoderDecoderModel model accepts raw waveform input values from speech and
makes use of generate() to translate the input speech
autoregressively to the target language.
The Wav2Vec2FeatureExtractor class is responsible for preprocessing the input speech and
Speech2Text2Tokenizer decodes the generated target tokens to the target string. The
Speech2Text2Processor wraps Wav2Vec2FeatureExtractor and
Speech2Text2Tokenizer into a single instance to both extract the input features and decode the
predicted token ids.
Step-by-step Speech Translation
Copied
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
Speech Translation via Pipelines
The automatic speech recognition pipeline can also be used to translate speech in just a couple of lines of code:
Copied
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline(
... "automatic-speech-recognition",
... model="facebook/s2t-wav2vec2-large-en-de",
... feature_extractor="facebook/s2t-wav2vec2-large-en-de",
... )
translation_de = asr(librispeech_en[0]["file"])
See model hub to look for Speech2Text2 checkpoints.
Documentation resources
Causal language modeling task guide
Speech2Text2Config
class transformers.Speech2Text2Config
<
source
>
(
vocab_size = 10000
decoder_layers = 6
decoder_ffn_dim = 2048
decoder_attention_heads = 4
decoder_layerdrop = 0.0
use_cache = True
activation_function = 'relu'
d_model = 256
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 2
scale_embedding = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
max_target_positions = 1024
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 10000) —
Vocabulary size of the Speech2Text2 model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling Speech2Text2ForCausalLM.
d_model (int, optional, defaults to 256) —
Dimensionality of the layers and the pooler layer.
decoder_layers (int, optional, defaults to 6) —
Number of decoder layers.
decoder_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
activation_function (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the decoder. If string, "gelu", "relu",
"silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings and decoder.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
max_target_positions (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
This is the configuration class to store the configuration of a Speech2Text2ForCausalLM. It is used to
instantiate a Speech2Text2 model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Speech2Text2
facebook/s2t-wav2vec2-large-en-de architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import Speech2Text2Config, Speech2Text2ForCausalLM
# Initializing a Speech2Text2 s2t_transformer_s style configuration
configuration = Speech2Text2Config()
# Initializing a model (with random weights) from the s2t_transformer_s style configuration
model = Speech2Text2ForCausalLM(configuration)
# Accessing the model configuration
configuration = model.config
Speech2Text2Tokenizer
class transformers.Speech2Text2Tokenizer
<
source
>
(
vocab_file
bos_token = '<s>'
pad_token = '<pad>'
eos_token = '</s>'
unk_token = '<unk>'
do_lower_case = False
merges_file = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
**kwargs —
Additional keyword arguments passed along to PreTrainedTokenizer
Constructs a Speech2Text2Tokenizer.
This tokenizer inherits from PreTrainedTokenizer which contains some of the main methods. Users should refer to
the superclass for more information regarding such methods.
batch_decode
<
source
>
(
sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
**kwargs
)
→
List[str]
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces. If None, will default to
self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
List[str]
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
decode
<
source
>
(
token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
**kwargs
)
→
str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces. If None, will default to
self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special
tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
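For example (a minimal sketch; the token ids are illustrative, not real model outputs):
from transformers import Speech2Text2Tokenizer

tokenizer = Speech2Text2Tokenizer.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
token_ids = [0, 5, 9, 2]  # illustrative ids, e.g. from SpeechEncoderDecoderModel.generate()
text = tokenizer.decode(token_ids, skip_special_tokens=True)
# batch_decode applies decode to every sequence in a batch
texts = tokenizer.batch_decode([token_ids, token_ids], skip_special_tokens=True)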
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
Speech2Text2Processor
class transformers.Speech2Text2Processor
<
source
>
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (AutoFeatureExtractor) —
An instance of AutoFeatureExtractor. The feature extractor is a required input.
tokenizer (Speech2Text2Tokenizer) —
An instance of Speech2Text2Tokenizer. The tokenizer is a required input.
Constructs a Speech2Text2 processor which wraps a Speech2Text2 feature extractor and a Speech2Text2 tokenizer into
a single processor.
Speech2Text2Processor offers all the functionalities of AutoFeatureExtractor and Speech2Text2Tokenizer.
See the __call__() and decode() methods for more information.
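For example (a minimal sketch; the silent one-second waveform and the token ids are purely illustrative):
from transformers import Speech2Text2Processor

processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

# extract input features from a raw waveform (here: one second of silence at 16 kHz)
waveform = [0.0] * 16_000
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

# decode predicted token ids back to text (illustrative ids only)
processor.batch_decode([[0, 5, 9, 2]], skip_special_tokens=True)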
__call__
<
source
>
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to AutoFeatureExtractor’s
__call__() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to
Speech2Text2Tokenizer’s __call__(). Please refer to the docstring of the above two
methods for more information.
from_pretrained
<
source
>
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method simply calls the feature extractor’s
from_pretrained(), the image processor’s
ImageProcessingMixin.from_pretrained() and the tokenizer’s
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
<
source
>
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method simply calls the feature extractor’s save_pretrained() and the
tokenizer’s save_pretrained() methods. Please refer to the docstrings of the
methods above for more information.
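For example (a minimal sketch; the local directory name is illustrative):
from transformers import Speech2Text2Processor

processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
# writes the feature extractor JSON file and the tokenizer files to the directory
processor.save_pretrained("./s2t2-wav2vec2-processor")
reloaded_processor = Speech2Text2Processor.from_pretrained("./s2t2-wav2vec2-processor")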
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to Speech2Text2Tokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to Speech2Text2Tokenizer’s decode(). Please refer
to the docstring of this method for more information.
Speech2Text2ForCausalLM
class transformers.Speech2Text2ForCausalLM
<
source
>
(
config
)
Parameters
config (Speech2Text2Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Speech2Text2 Decoder with a language modeling head. Can be used as the decoder part of EncoderDecoderModel and SpeechEncoderDecoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using Speech2Text2Tokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Speech2Text2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
Copied
from transformers import (
... SpeechEncoderDecoderModel,
... Speech2Text2ForCausalLM,
... Wav2Vec2Model,
... Speech2Text2Config,
... Wav2Vec2Config,
... Wav2Vec2FeatureExtractor,
... Speech2Text2Tokenizer,
... )
from datasets import load_dataset
feature_extractor = Wav2Vec2FeatureExtractor()
tokenizer = Speech2Text2Tokenizer.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
encoder = Wav2Vec2Model(Wav2Vec2Config())
decoder = Speech2Text2ForCausalLM(Speech2Text2Config())
# init random speech2text model
model = SpeechEncoderDecoderModel(encoder=encoder, decoder=decoder)
model.config.pad_token_id = tokenizer.pad_token_id
model.config.decoder_start_token_id = tokenizer.bos_token_id
# pre-process inputs and labels
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(
... ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt"
... )
input_values = inputs.input_values
decoder_input_ids = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(inputs=input_values, labels=decoder_input_ids).loss
# backprop loss
loss.backward()
ImageGPT
Overview
The ImageGPT model was proposed in Generative Pretraining from Pixels by Mark
Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like
model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
The abstract from the paper is the following:
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels,
without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels,
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide
ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
top-1 accuracy on a linear probe of our features.
Summary of the approach. Taken from the original paper (https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf).
This model was contributed by nielsr, based on this issue. The original code can be found
here.
Tips:
ImageGPT is almost exactly the same as GPT-2, with the exception that a different activation
function is used (namely “quick gelu”), and the layer normalization layers don’t mean center the inputs. ImageGPT
also doesn’t have tied input- and output embeddings.
As the time and memory requirements of the attention mechanism of Transformers scale quadratically in the sequence
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors
applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
embedding matrix. In other words, the vocabulary size of ImageGPT is 512, plus 1 for a special “start of sentence” (SOS)
token used at the beginning of every sequence. One can use ImageGPTImageProcessor to prepare
images for the model.
Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
performant image features useful for downstream tasks, such as image classification. The authors showed that the
features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as
a sklearn logistic regression model, for example). This is also referred to as “linear probing”. Features can be
easily obtained by forwarding the image through the model with output_hidden_states=True, and
then average-pooling the hidden states at whatever layer you like, as in the sketch below.
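A minimal sketch of this linear-probing feature extraction (the choice of the middle layer and of mean pooling is just one reasonable option; the resulting features would then be fed to e.g. a scikit-learn logistic regression):
import torch
from transformers import AutoImageProcessor, ImageGPTModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# average-pool the hidden states of a middle layer into one feature vector per image
middle_layer = len(outputs.hidden_states) // 2
features = outputs.hidden_states[middle_layer].mean(dim=1)  # shape (batch_size, hidden_size)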
Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can
use ImageGPTForImageClassification.
ImageGPT comes in different sizes: there’s ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors did also
train an XL variant, which they didn’t release. The differences in size are summarized in the following table:
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
|---|---|---|---|---|---|
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT.
Image Classification
Demo notebooks for ImageGPT can be found here.
ImageGPTForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ImageGPTConfig
class transformers.ImageGPTConfig
<
source
>
(
vocab_size = 513
n_positions = 1024
n_embd = 512
n_layer = 24
n_head = 8
n_inner = None
activation_function = 'quick_gelu'
resid_pdrop = 0.1
embd_pdrop = 0.1
attn_pdrop = 0.1
layer_norm_epsilon = 1e-05
initializer_range = 0.02
scale_attn_weights = True
use_cache = True
tie_word_embeddings = False
scale_attn_by_inverse_layer_idx = False
reorder_and_upcast_attn = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 513) —
Vocabulary size of the ImageGPT model (512 color clusters plus one SOS token). Defines the number of different
tokens that can be represented by the inputs_ids passed when calling ImageGPTModel or TFImageGPTModel.
n_positions (int, optional, defaults to 32*32) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 512) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (int, optional, defaults to None) —
Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd
activation_function (str, optional, defaults to "quick_gelu") —
Activation function (can be one of the activation functions defined in src/transformers/activations.py).
Defaults to “quick_gelu”.
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (int, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (bool, optional, defaults to True) —
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) —
Whether to additionally scale attention weights by 1 / (layer_idx + 1).
reorder_and_upcast_attn (bool, optional, defaults to False) —
Whether to scale keys (K) prior to computing attention (dot-product) and upcast the attention
dot-product/softmax to float32 when training with mixed precision.
This is the configuration class to store the configuration of an ImageGPTModel or a TFImageGPTModel. It is
used to instantiate an ImageGPT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ImageGPT
openai/imagegpt-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import ImageGPTConfig, ImageGPTModel
# Initializing a ImageGPT configuration
configuration = ImageGPTConfig()
# Initializing a model (with random weights) from the configuration
model = ImageGPTModel(configuration)
# Accessing the model configuration
configuration = model.config
ImageGPTFeatureExtractor
class transformers.ImageGPTFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
ImageGPTImageProcessor
class transformers.ImageGPTImageProcessor
<
source
>
(
clusters: typing.Union[typing.List[typing.List[int]], numpy.ndarray, NoneType] = None
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_normalize: bool = True
do_color_quantize: bool = True
**kwargs
)
Parameters
clusters (np.ndarray or List[List[int]], optional) —
The color clusters to use, of shape (n_clusters, 3) when color quantizing. Can be overridden by clusters
in preprocess.
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s dimensions to (size["height"], size["width"]). Can be overridden by
do_resize in preprocess.
size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) —
Size of the image after resizing. Can be overridden by size in preprocess.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by resample in preprocess.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image pixel value to between [-1, 1]. Can be overridden by do_normalize in
preprocess.
do_color_quantize (bool, optional, defaults to True) —
Whether to color quantize the image. Can be overridden by do_color_quantize in preprocess.
Constructs an ImageGPT image processor. This image processor can be used to resize images to a smaller resolution
(such as 32x32 or 64x64), normalize them and finally color quantize them to obtain sequences of “pixel values”
(color clusters).
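For example (a minimal sketch; the COCO image URL is only used for illustration, and the exact sequence length depends on the checkpoint’s resolution):
from transformers import ImageGPTImageProcessor
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
# resize, normalize and color-quantize the image into a sequence of cluster ids
encoding = image_processor(images=image, return_tensors="pt")
print(encoding.input_ids.shape)  # e.g. (1, 1024) for a 32x32 resolution checkpoint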
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_normalize: bool = None
do_color_quantize: typing.Optional[bool] = None
clusters: typing.Union[typing.List[typing.List[int]], numpy.ndarray, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image
do_color_quantize (bool, optional, defaults to self.do_color_quantize) —
Whether to color quantize the image.
clusters (np.ndarray or List[List[int]], optional, defaults to self.clusters) —
Clusters used to quantize the image of shape (n_clusters, 3). Only has an effect if
do_color_quantize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Only has an effect if do_color_quantize is set to False.
Preprocess an image or batch of images.
ImageGPTModel
class transformers.ImageGPTModel
<
source
>
(
config: ImageGPTConfig
)
Parameters
config (ImageGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ImageGPT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs: typing.Any
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoImageProcessor. See ImageGPTImageProcessor.__call__() for details.
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ImageGPTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The ImageGPTModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoImageProcessor, ImageGPTModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ImageGPTForCausalImageModeling
class transformers.ImageGPTForCausalImageModeling
(
config: ImageGPTConfig
)
Parameters
config (ImageGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The ImageGPT Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs: typing.Any
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoImageProcessor. See ImageGPTImageProcessor.call() for details.
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ImageGPTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The ImageGPTForCausalImageModeling forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ImageGPTForCausalImageModeling
import torch
import matplotlib.pyplot as plt
import numpy as np
image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# unconditional generation of 8 images
batch_size = 4
context = torch.full((batch_size, 1), model.config.vocab_size - 1) # initialize with SOS token
context = context.to(device)
output = model.generate(
... input_ids=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40
... )
clusters = image_processor.clusters
height = image_processor.size["height"]
width = image_processor.size["width"]
samples = output[:, 1:].cpu().detach().numpy()
samples_img = [
... np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [height, width, 3]).astype(np.uint8) for s in samples
... ] # convert color cluster tokens back to pixels
f, axes = plt.subplots(1, batch_size, dpi=300)
for img, ax in zip(samples_img, axes):
... ax.axis("off")
... ax.imshow(img)
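The same head can also complete a partially observed image, as in the image-completion setting of the ImageGPT paper: encode an image into color-cluster tokens, keep the first half as a primer, and let the model generate the rest. The sketch below is illustrative rather than an official recipe; it reuses model, image_processor, device, height, width and clusters from the example above, and the COCO image URL and the 50% primer length are arbitrary choices.
from PIL import Image
import requests
import numpy as np
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw)
# encode the image into color-cluster tokens and keep only the first half as a primer
pixel_ids = image_processor(images=image, return_tensors="pt").input_ids.to(device)
primer = pixel_ids[:, : (height * width) // 2]
# prepend the SOS token, then let the model generate the missing second half
sos = torch.full((primer.shape[0], 1), model.config.vocab_size - 1, dtype=torch.long, device=device)
context = torch.cat((sos, primer), dim=1)
completed = model.generate(input_ids=context, max_length=model.config.n_positions + 1, do_sample=True, top_k=40)
# convert the color-cluster tokens back to pixels exactly as in the unconditional example
tokens = completed[:, 1:].cpu().numpy()
completed_img = np.reshape(np.rint(127.5 * (clusters[tokens[0]] + 1.0)), [height, width, 3]).astype(np.uint8)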
ImageGPTForImageClassification
class transformers.ImageGPTForImageClassification
(
config: ImageGPTConfig
)
Parameters
config (ImageGPTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The ImageGPT Model transformer with an image classification head on top (linear layer).
ImageGPTForImageClassification average-pools the hidden states in order to do the classification.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
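As a rough sketch of what this head does (not the library's exact implementation), the hidden states are averaged over the sequence of pixel tokens and the pooled vector is passed through a linear layer; the class and attribute names below are illustrative.
import torch
from torch import nn
class AveragePoolClassifierSketch(nn.Module):
    """Illustrative stand-in for the average-pool + linear classification head described above."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, num_labels, bias=False)
    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch_size, sequence_length, hidden_size) from the transformer
        pooled = hidden_states.mean(dim=1)  # average over the sequence of pixel tokens
        return self.score(pooled)  # (batch_size, num_labels) classification logits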
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs: typing.Any
)
→
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoImageProcessor. See ImageGPTImageProcessor.call() for details.
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ImageGPTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ImageGPTForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ImageGPTForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTForImageClassification.from_pretrained("openai/imagegpt-small")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
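To turn these logits into a class prediction, take the argmax over the label dimension. Note that the base openai/imagegpt-small checkpoint ships with a randomly initialized classification head, so the prediction is only meaningful after fine-tuning, and id2label falls back to generic LABEL_i names unless you configure your own.
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])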
MBart and MBart-50
DISCLAIMER: If you see something strange, file a GitHub issue and assign @patrickvonplaten.
Overview of MBart
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual
corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete
sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only
on the encoder, decoder, or reconstructing parts of the text.
This model was contributed by valhalla. The authors’ code can be found here.
Training of MBart
MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the
model is multilingual, it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
The regular call() will encode the source text format passed as the first argument or with the text
keyword, and the target text format passed with the text_target keyword argument.
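A quick way to verify the source-side layout is to decode what the tokenizer produces; the sketch below assumes the facebook/mbart-large-en-ro checkpoint used in the examples that follow and only inspects the encoder input (the [tgt_lang_code] X [eos] layout is what the model builds internally for decoder_input_ids).
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
ids = tokenizer("UN Chief Says There Is No Military Solution in Syria")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids)[-2:])  # expected: ['</s>', 'en_XX'], i.e. X [eos, src_lang_code]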
Supervised training
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
# forward pass
model(**inputs)
Generation
While generating the target text, set the decoder_start_token_id to the target language id. The following
example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Şeful ONU declară că nu există o soluţie militară în Siria"
Overview of MBart-50
MBart-50 was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original mbart-large-cc25 checkpoint by extending
its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50
languages.
According to the abstract:
Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one
direction, a pretrained model is finetuned on many directions at the same time. The paper demonstrates that pretrained models
can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on
average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while
improving 9.3 BLEU on average over bilingual baselines from scratch.
Training of MBart-50
The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix
for both source and target text, i.e. the text format is [lang_code] X [eos], where lang_code is the source
language id for source text and the target language id for target text, with X being the source or target text
respectively.
MBart-50 has its own tokenizer MBart50Tokenizer.
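In contrast to the mBART layout checked above, the language code now appears as a prefix rather than a suffix; a small sketch assuming the facebook/mbart-large-50 checkpoint used below:
from transformers import MBart50TokenizerFast
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
tokens = tokenizer.convert_ids_to_tokens(tokenizer("UN Chief Says There Is No Military Solution in Syria")["input_ids"])
print(tokens[0], tokens[-1])  # expected: en_XX </s>, i.e. [lang_code] X [eos]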
Supervised training
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
model(**model_inputs) # forward pass
Generation
To generate using the mBART-50 multilingual translation models, eos_token_id is used as the
decoder_start_token_id and the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method.
The following example shows how to translate from Hindi to French and from Arabic to English using the
facebook/mbart-large-50-many-to-many-mmt checkpoint.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."
# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MBartConfig
class transformers.MBartConfig
(
vocab_size = 50265
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
classifier_dropout = 0.0
scale_embedding = False
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
forced_eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the MBART model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling MBartModel or TFMBartModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a MBartModel. It is used to instantiate an MBART
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MBART
facebook/mbart-large-cc25 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MBartConfig, MBartModel
# Initializing a MBART facebook/mbart-large-cc25 style configuration
configuration = MBartConfig()
# Initializing a model (with random weights) from the facebook/mbart-large-cc25 style configuration
model = MBartModel(configuration)
# Accessing the model configuration
configuration = model.config
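Because every architecture hyperparameter is a keyword argument, you can also build a deliberately tiny configuration for quick experiments; the sizes below are arbitrary illustrative choices, not a recommended setup.
from transformers import MBartConfig, MBartModel
# a tiny, randomly initialized MBART, handy for unit tests and debugging
tiny_config = MBartConfig(
    vocab_size=1024,
    d_model=64,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=2,
    decoder_attention_heads=2,
    encoder_ffn_dim=128,
    decoder_ffn_dim=128,
    max_position_embeddings=128,
)
tiny_model = MBartModel(tiny_config)
print(sum(p.numel() for p in tiny_model.parameters()))  # far fewer parameters than mbart-large-cc25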
MBartTokenizer
class transformers.MBartTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
tokenizer_file = None
src_lang = None
tgt_lang = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
additional_special_tokens = None
**kwargs
)
Construct an MBART tokenizer.
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An MBART sequence has the following format, where X represents the sequence:
input_ids (for encoder) X [eos, src_lang_code]
decoder_input_ids: (for decoder) X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
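For completeness, a minimal sketch of calling this method directly (normally tokenizer(...) does it for you); the en-ro checkpoint and the toy sentence are illustrative.
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
with_special = tokenizer.build_inputs_with_special_tokens(token_ids)
print(tokenizer.convert_ids_to_tokens(with_special))  # expected to end with ['</s>', 'en_XX']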
MBartTokenizerFast
class transformers.MBartTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
src_lang = None
tgt_lang = None
additional_special_tokens = None
**kwargs
)
Construct a “fast” MBART tokenizer (backed by HuggingFace’s tokenizers library). Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
from transformers import MBartTokenizerFast
tokenizer = MBartTokenizerFast.from_pretrained(
... "facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO"
... )
example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. The special tokens depend on calling set_lang.
An MBART sequence has the following format, where X represents the sequence:
input_ids (for encoder) X [eos, src_lang_code]
decoder_input_ids: (for decoder) X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. mBART does not
make use of token type ids, therefore a list of zeros is returned.
set_src_lang_special_tokens
(
src_lang
)
Reset the special tokens to the source lang setting. No prefix and suffix=[eos, src_lang_code].
set_tgt_lang_special_tokens
(
lang: str
)
Reset the special tokens to the target language setting. No prefix and suffix=[eos, tgt_lang_code].
MBart50Tokenizer
class transformers.MBart50Tokenizer
(
vocab_file
src_lang = None
tgt_lang = None
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
src_lang (str, optional) —
A string representing the source language.
tgt_lang (str, optional) —
A string representing the target language.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct an MBart50 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Examples:
from transformers import MBart50Tokenizer
tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
# model(**model_inputs) should work
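The sp_model_kwargs argument described above can also enable SentencePiece subword regularization, which makes tokenization stochastic and can act as a regularizer during training; the sampling values below are illustrative assumptions.
from transformers import MBart50Tokenizer
sampling_tokenizer = MBart50Tokenizer.from_pretrained(
    "facebook/mbart-large-50",
    src_lang="en_XX",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
print(sampling_tokenizer.tokenize("UN Chief Says There Is No Military Solution in Syria"))
print(sampling_tokenizer.tokenize("UN Chief Says There Is No Military Solution in Syria"))  # likely a different split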
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An MBART-50 sequence has the following format, where X represents the sequence:
input_ids (for encoder) [src_lang_code] X [eos]
labels: (for decoder) [tgt_lang_code] X [eos]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) in a single string.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
set_src_lang_special_tokens
(
src_lang: str
)
Reset the special tokens to the source lang setting. prefix=[src_lang_code] and suffix=[eos].
set_tgt_lang_special_tokens
(
tgt_lang: str
)
Reset the special tokens to the target language setting. prefix=[tgt_lang_code] and suffix=[eos].
MBart50TokenizerFast
class transformers.MBart50TokenizerFast
(
vocab_file = None
src_lang = None
tgt_lang = None
tokenizer_file = None
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
src_lang (str, optional) —
A string representing the source language.
tgt_lang (str, optional) —
A string representing the target language.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a “fast” MBART tokenizer for mBART-50 (backed by HuggingFace’s tokenizers library). Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Examples:
from transformers import MBart50TokenizerFast
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
# model(**model_inputs) should work
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. The special tokens depend on calling set_lang.
An MBART-50 sequence has the following format, where X represents the sequence:
input_ids (for encoder) [src_lang_code] X [eos]
labels: (for decoder) [tgt_lang_code] X [eos]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
set_src_lang_special_tokens
<
source
>
(
src_lang: str
)
Reset the special tokens to the source lang setting. prefix=[src_lang_code] and suffix=[eos].
set_tgt_lang_special_tokens
(
tgt_lang: str
)
Reset the special tokens to the target language setting. prefix=[tgt_lang_code] and suffix=[eos].
MBartModel
class transformers.MBartModel
(
config: MBartConfig
)
Parameters
config (MBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare MBART Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 250004 for en_XX and 250003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MBartModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MBartModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartModel.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
MBartForConditionalGeneration
class transformers.MBartForConditionalGeneration
(
config: MBartConfig
)
Parameters
config (MBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The MBART Model with a language modeling head. Can be used for summarization, after fine-tuning the pretrained models.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
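As a small, hedged sketch of how the labels argument documented below produces a training loss (the en-ro checkpoint and the single sentence pair are illustrative, not a full training loop):
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
batch = tokenizer(
    "UN Chief Says There Is No Military Solution in Syria",
    text_target="Şeful ONU declară că nu există o soluţie militară în Siria",
    return_tensors="pt",
)
outputs = model(**batch)  # the labels inside `batch` trigger the loss computation
print(outputs.loss)  # scalar language-modeling loss
outputs.loss.backward()  # an optimizer step would follow in real fine-tuning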
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 250004 for en_XX and 250003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MBartForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Translation example:
from transformers import AutoTokenizer, MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
example_english_phrase = "42 is the answer"
inputs = tokenizer(example_english_phrase, return_tensors="pt")
# Translate
generated_ids = model.generate(**inputs, num_beams=4, max_length=5)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'42 este răspuns'
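For fine-tuning, the same model can instead be given labels so that the forward pass returns a loss. The sketch below is illustrative only: the Romanian target string and the text_target= tokenizer argument are assumptions (the latter requires a reasonably recent transformers version), not part of the example above.
from transformers import AutoTokenizer, MBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

batch = tokenizer("42 is the answer", text_target="42 este răspunsul", return_tensors="pt")
labels = batch["labels"].clone()
labels[labels == tokenizer.pad_token_id] = -100  # positions set to -100 are ignored by the loss (see `labels` above)

# With `labels` given and `decoder_input_ids` omitted, the decoder inputs are built
# internally by shifting to the right, as described in the parameter list above.
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], labels=labels)
print(outputs.loss)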
Mask filling example:
from transformers import AutoTokenizer, MBartForConditionalGeneration
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
# de_DE is the language symbol id <LID> for German
TXT = "</s> Meine Freunde sind <mask> nett aber sie essen zu viel Kuchen. </s> de_DE"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="pt")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
['nett', 'sehr', 'ganz', 'nicht', 'so']
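To actually fill the mask, the top-scoring prediction can be written back into the input and decoded; a small sketch continuing the example above:
# Replace the <mask> position with the highest-scoring token id and decode the sentence.
filled_ids = input_ids.clone()
filled_ids[0, masked_index] = predictions[0]
print(tokenizer.decode(filled_ids[0], skip_special_tokens=True))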
MBartForQuestionAnswering
class transformers.MBartForQuestionAnswering
(
config
)
Parameters
config (MBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
MBART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: Tensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 25004 for en_XX, and 25003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MBartForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MBartForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForQuestionAnswering.from_pretrained("facebook/mbart-large-cc25")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
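The predicted span can be turned back into a text answer with the tokenizer; a short sketch continuing the example above (note that facebook/mbart-large-cc25 has no fine-tuned question-answering head, so the decoded span is only meaningful after fine-tuning):
# Decode the predicted token span back to a string.
answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
print(answer)  # with fine-tuned weights this should resemble "nice puppet"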
MBartForSequenceClassification
class transformers.MBartForSequenceClassification
(
config: MBartConfig
**kwargs
)
Parameters
config (MBartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
MBart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 25004 for en_XX, and 25003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MBartForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, MBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForSequenceClassification.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MBartForSequenceClassification.from_pretrained("facebook/mbart-large-cc25", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, MBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForSequenceClassification.from_pretrained("facebook/mbart-large-cc25", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = MBartForSequenceClassification.from_pretrained(
    "facebook/mbart-large-cc25", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
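The class index produced in the single-label example above can be mapped back to a readable name through the model config; a minimal sketch (for a head loaded from facebook/mbart-large-cc25 without fine-tuning, id2label only contains generic names such as "LABEL_0"):
# Continuing the single-label example: map the argmax id to its label name.
print(model.config.id2label[predicted_class_id])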
MBartForCausalLM
class transformers.MBartForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, MBartForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForCausalLM.from_pretrained("facebook/mbart-large-cc25", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
list(logits.shape) == expected_shape
True
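Because the forward pass returns past_key_values when use_cache=True, decoding can proceed incrementally by feeding only the newest token at each step. A minimal greedy-decoding sketch reusing the model, tokenizer and inputs from the example above; it is illustrative only and not a replacement for generate():
import torch

generated = inputs.input_ids
past_key_values = None
for _ in range(5):
    # After the first step, only the last token is fed; the cache carries the earlier ones.
    step_input = generated if past_key_values is None else generated[:, -1:]
    out = model(input_ids=step_input, past_key_values=past_key_values, use_cache=True)
    past_key_values = out.past_key_values
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=-1)
print(tokenizer.decode(generated[0], skip_special_tokens=True))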
TFMBartModel
class transformers.TFMBartModel
(
*args
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MBART Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
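A minimal sketch of the three formats listed above, assuming the facebook/mbart-large-cc25 checkpoint used elsewhere on this page:
from transformers import AutoTokenizer, TFMBartModel

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = TFMBartModel.from_pretrained("facebook/mbart-large-cc25")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])  # keyword arguments
out_list = model([enc["input_ids"], enc["attention_mask"]])  # list in the first positional argument
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})  # dict in the first positional argument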
call
(
input_ids: TFModelInputType = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: Tuple[Tuple[tf.Tensor]] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 25004 for en_XX, and 25003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a mask that ignores pad tokens will be made by default. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence token in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states of shape (batch_size, sequence_length, hidden_size) at the output of the last
layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFMBartModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFMBartModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = TFMBartModel.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFMBartForConditionalGeneration
class transformers.TFMBartForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The MBART Model with a language modeling head. Can be used for summarization after fine-tuning the pretrained models.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: Tuple[Tuple[tf.Tensor]] = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MBart uses a specific language id token as the starting token for decoder_input_ids generation that
varies according to source and target language, e.g. 25004 for en_XX, and 25003 for de_DE. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a mask that ignores pad tokens will be created by default. It is not recommended to set this yourself for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at the
output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A fine-tuning sketch that passes labels is shown after the examples below.
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFMBartForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Translation example:
from transformers import AutoTokenizer, TFMBartForConditionalGeneration
model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
example_english_phrase = "42 is the answer"
inputs = tokenizer(example_english_phrase, return_tensors="tf")
# Translate
generated_ids = model.generate(**inputs, num_beams=4, max_length=5)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'42 este răspuns'
Mask filling example:
from transformers import AutoTokenizer, TFMBartForConditionalGeneration
import tensorflow as tf
model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
# de_DE is the language symbol id <LID> for German
TXT = "</s> Meine Freunde sind <mask> nett aber sie essen zu viel Kuchen. </s> de_DE"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="tf")["input_ids"]
logits = model(input_ids).logits
masked_index = tf.where(input_ids[0] == tokenizer.mask_token_id)[0, 0]
probs = tf.nn.softmax(logits[0, masked_index], axis=0)
values, predictions = tf.math.top_k(probs, 5)
tokenizer.decode(predictions).split()
['nett', 'sehr', 'ganz', 'nicht', 'so']
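For fine-tuning, the labels argument described above drives the loss computation. A minimal sketch, assuming the en-ro checkpoint and a tokenizer version that supports the text_target argument (in practice you would also set the MBart source and target language codes on the tokenizer):
from transformers import AutoTokenizer, TFMBartForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = TFMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
inputs = tokenizer("42 is the answer", return_tensors="tf")
labels = tokenizer(text_target="42 este răspunsul", return_tensors="tf").input_ids
# decoder_input_ids are created for you by shifting the labels to the right
outputs = model(**inputs, labels=labels)
loss = outputs.loss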
FlaxMBartModel
class transformers.FlaxMBartModel
(
config: MBartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = jax.numpy.float32
_do_init: bool = True
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
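For example, a minimal sketch of running the computation in bfloat16 and, optionally, casting the parameters as well:
import jax.numpy as jnp
from transformers import FlaxMBartModel
# the computation runs in bfloat16; the parameters stay in float32 unless cast explicitly
model = FlaxMBartModel.from_pretrained("facebook/mbart-large-cc25", dtype=jnp.bfloat16)
# cast the parameters themselves to bfloat16
model.params = model.to_bf16(model.params)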
The bare MBart Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
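As an illustration of the JIT support, a minimal sketch that compiles the forward pass (the compiled function is reused for inputs with the same shapes):
import jax
from transformers import AutoTokenizer, FlaxMBartModel
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = FlaxMBartModel.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
@jax.jit
def forward(input_ids, attention_mask):
    # the model parameters are closed over and treated as constants by XLA
    return model(input_ids=input_ids, attention_mask=attention_mask)
outputs = forward(inputs["input_ids"], inputs["attention_mask"])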
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxMBartModel
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = FlaxMBartModel.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
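Continuing the example above, the hidden states and attention weights described in the return section can be requested explicitly:
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
encoder_hidden_states = outputs.encoder_hidden_states  # embedding output + one entry per encoder layer
decoder_attentions = outputs.decoder_attentions  # one entry per decoder layer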
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
from transformers import AutoTokenizer, FlaxMBartModel
import jax.numpy as jnp
model = FlaxMBartModel.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
FlaxMBartForConditionalGeneration
class transformers.FlaxMBartForConditionalGeneration
(
config: MBartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = jax.numpy.float32
_do_init: bool = True
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The MBart Model with a language modeling head. Can be used for summarization.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration, MBartConfig
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
ARTICLE_TO_SUMMARIZE = "Meine Freunde sind cool, aber sie essen zu viel Kuchen."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
Mask filling example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
import jax
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
# de_DE is the language symbol id <LID> for German
TXT = "</s> Meine Freunde sind <mask> nett aber sie essen zu viel Kuchen. </s> de_DE"
input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="np")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
probs = jax.nn.softmax(logits[0, masked_index], axis=0)
values, predictions = jax.lax.top_k(probs, k=5)
tokenizer.decode(predictions).split()
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
import jax.numpy as jnp
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
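Building on the example above, a sketch of greedy decoding with the pre-allocated cache; this mirrors what generate() handles for you internally, and assumes init_cache(batch_size, max_length, encoder_outputs), the full-length decoder_attention_mask and the per-step decoder_position_ids behave as in the library's other Flax seq2seq models:
max_length = 8
batch_size = inputs.input_ids.shape[0]
# pre-allocate the key/value cache for max_length decoding steps
past_key_values = model.init_cache(batch_size, max_length, encoder_outputs)
# with a pre-allocated cache the attention mask spans the full cache length
decoder_attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
generated = decoder_input_ids  # start from the decoder start token prepared above
for step in range(max_length - 1):
    outputs = model.decode(
        generated[:, -1:],
        encoder_outputs,
        decoder_attention_mask=decoder_attention_mask,
        decoder_position_ids=jnp.full((batch_size, 1), step, dtype="i4"),
        past_key_values=past_key_values,
    )
    past_key_values = outputs.past_key_values
    next_token = jnp.argmax(outputs.logits[:, -1:], axis=-1).astype("i4")
    generated = jnp.concatenate([generated, next_token], axis=-1)
tokenizer.batch_decode(generated, skip_special_tokens=True)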
FlaxMBartForSequenceClassification
class transformers.FlaxMBartForSequenceClassification
(
config: MBartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = jax.numpy.float32
_do_init: bool = True
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
MBart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxMBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = FlaxMBartForSequenceClassification.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
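As a follow-up, the classification logits can be mapped to a label id (this checkpoint has no fine-tuned classification head, so the prediction is only illustrative):
import jax.numpy as jnp
predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
predicted_label = model.config.id2label[predicted_class_id]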
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of jnp.ndarray (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
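The past_key_values mechanism documented above can be used to decode one token at a time without recomputing earlier key/value states. The sketch below is a minimal illustration that assumes init_cache(batch_size, max_length, encoder_outputs) is available on the model (as referenced in the past_key_values description) and that decoder_position_ids must be supplied explicitly whenever past_key_values is passed; it reuses inputs and encoder_outputs from the example above and feeds a placeholder token at the second step.
import jax.numpy as jnp
batch_size = inputs.input_ids.shape[0]
max_length = 8  # upper bound used to allocate the cache
past_key_values = model.init_cache(batch_size, max_length, encoder_outputs)
# step 0: feed the decoder start token at position 0
decoder_input_ids = jnp.ones((batch_size, 1), dtype="i4") * model.config.decoder_start_token_id
decoder_position_ids = jnp.zeros((batch_size, 1), dtype="i4")
outputs = model.decode(decoder_input_ids, encoder_outputs, decoder_position_ids=decoder_position_ids, past_key_values=past_key_values)
past_key_values = outputs.past_key_values  # updated cache, reused at the next step
# step 1: feed only the next token (a placeholder here) together with the cache, at position 1
next_token = jnp.ones((batch_size, 1), dtype="i4") * model.config.eos_token_id
outputs = model.decode(next_token, encoder_outputs, decoder_position_ids=decoder_position_ids + 1, past_key_values=past_key_values)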
FlaxMBartForQuestionAnswering
class transformers.FlaxMBartForQuestionAnswering
(
config: MBartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (MBartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
MBart Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
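As a minimal sketch of the dtype argument described in the parameters above, the checkpoint can be loaded so that the computation runs in bfloat16; casting the parameters themselves is a separate, optional step via to_bf16().
import jax.numpy as jnp
from transformers import FlaxMBartForQuestionAnswering
# computation runs in bfloat16; the parameter dtypes are unchanged by this flag alone
model = FlaxMBartForQuestionAnswering.from_pretrained("facebook/mbart-large-cc25", dtype=jnp.bfloat16)
# optionally also cast the parameters to bfloat16
model.params = model.to_bf16(model.params)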
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxMBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxMBartForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = FlaxMBartForQuestionAnswering.from_pretrained("facebook/mbart-large-cc25")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
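A minimal sketch of decoding an answer span from the start/end logits above; since this checkpoint has not been fine-tuned for question answering, the extracted span is only illustrative.
import jax.numpy as jnp
start_index = int(jnp.argmax(start_scores, axis=-1)[0])
end_index = int(jnp.argmax(end_scores, axis=-1)[0])
# with an untuned head, start_index may exceed end_index, in which case the slice is empty
answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
answer = tokenizer.decode(answer_ids.tolist(), skip_special_tokens=True)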
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MBartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
ViTMSN
Overview
The ViTMSN model was proposed in Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes,
Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes
of masked patches with those of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot
regimes.
The abstract from the paper is the following:
We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our
approach matches the representation of an image view containing randomly masked patches to the representation of the original
unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the
unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures,
while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance,
on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy,
and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark.
Tips:
MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training
objective is to match the prototypes assigned to the unmasked views of the images to those of the masked views of the same images.
The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use it on your own image classification dataset,
use the ViTMSNForImageClassification class, which is initialized from ViTMSNModel (a minimal loading sketch is shown after these tips). Follow
this notebook for a detailed tutorial on fine-tuning.
MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K
labels when fine-tuned.
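A minimal loading sketch for the fine-tuning workflow mentioned in the tips; num_labels=10 is a hypothetical value standing in for the size of your own label set, and the classification head is newly initialized on top of the pre-trained backbone.
from transformers import AutoImageProcessor, ViTMSNForImageClassification
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNForImageClassification.from_pretrained(
    "facebook/vit-msn-small",
    num_labels=10,  # hypothetical number of classes in your dataset
    ignore_mismatched_sizes=True,  # guards against a differently sized head shipped with the checkpoint
)
# the model can now be fine-tuned with the Trainer API or a plain PyTorch training loop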
MSN architecture. Taken from the original paper.
This model was contributed by sayakpaul. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN.
Image Classification
ViTMSNForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMSNConfig
class transformers.ViTMSNConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-06
image_size = 224
patch_size = 16
num_channels = 3
qkv_bias = True
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-06) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
This is the configuration class to store the configuration of a ViTMSNModel. It is used to instantiate a ViT
MSN model according to the specified arguments, defining the model architecture. Instantiating a configuration with
the defaults will yield a similar configuration to that of the ViT
facebook/vit_msn_base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ViTMSNModel, ViTMSNConfig
# Initializing a ViT MSN vit-msn-base style configuration
configuration = ViTMSNConfig()
# Initializing a model from the vit-msn-base style configuration
model = ViTMSNModel(configuration)
# Accessing the model configuration
configuration = model.config
ViTMSNModel
class transformers.ViTMSNModel
(
config: ViTMSNConfig
use_mask_token: bool = False
)
Parameters
config (ViTMSNConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViTMSN Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTMSNConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTMSNModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ViTMSNModel
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-small")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
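Because MSN is trained for representation learning, a common follow-up is to use the [CLS] token of last_hidden_state as an image-level feature, for example for nearest-neighbour or low-shot classification. A minimal sketch reusing the objects from the example above:
import torch
with torch.no_grad():
    features = model(**inputs).last_hidden_state[:, 0]  # [CLS] token, shape (batch_size, hidden_size)
features = torch.nn.functional.normalize(features, dim=-1)  # unit-norm features for cosine similarity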
ViTMSNForImageClassification
class transformers.ViTMSNForImageClassification
(
config: ViTMSNConfig
)
Parameters
config (ViTMSNConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViTMSN Model with an image classification head on top e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
interpolate_pos_encoding: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTMSNConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViTMSNForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ViTMSNForImageClassification
import torch
from PIL import Image
import requests
torch.manual_seed(2)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-small")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
Kerry blue terrier
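The interpolate_pos_encoding argument described above lets the model run on a resolution different from the pre-training one. A minimal sketch, assuming the image processor accepts a size override and reusing the objects from the example above:
inputs = image_processor(images=image, size={"height": 384, "width": 384}, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs, interpolate_pos_encoding=True).logits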
BLOOM
Overview
The BLOOM model and its various versions were proposed through the BigScience Workshop. BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially the same as GPT-3 (an auto-regressive model for next-token prediction), but it has been trained on 46 different languages and 13 programming languages.
Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions:
bloom-560m
bloom-1b1
bloom-1b7
bloom-3b
bloom-7b1
bloom (176B parameters)
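As a quick-start sketch, any of the checkpoints listed above can be loaded with the Auto classes for text generation; the smallest one is used here to keep the download manageable.
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))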
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Generation
BloomForCausalLM is supported by this causal language modeling example script and notebook.
See also:
Causal language modeling task guide
Text classification task guide
Token classification task guide
Question answering task guide
⚡️ Inference
A blog on Optimization story: Bloom inference.
A blog on Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate.
⚙️ Training
A blog on The Technology Behind BLOOM Training.
BloomConfig
class transformers.BloomConfig
(
vocab_size = 250880
hidden_size = 64
n_layer = 2
n_head = 8
layer_norm_epsilon = 1e-05
initializer_range = 0.02
use_cache = True
bos_token_id = 1
eos_token_id = 2
apply_residual_connection_post_layernorm = False
hidden_dropout = 0.0
attention_dropout = 0.0
pretraining_tp = 1
slow_but_exact = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250880) —
Vocabulary size of the Bloom model. Defines the maximum number of different tokens that can be represented
by the inputs_ids passed when calling BloomModel. Check this
discussion on how the
vocab_size has been defined.
hidden_size (int, optional, defaults to 64) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 2) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
apply_residual_connection_post_layernorm (bool, optional, defaults to False) —
If enabled, use the layer norm of the hidden states as the residual in the transformer blocks.
hidden_dropout (float, optional, defaults to 0.0) —
Dropout rate applied to the hidden states in the bias-dropout-add operation.
attention_dropout (float, optional, defaults to 0.0) —
Dropout rate applied to the attention probabilities.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
pretraining_tp (int, optional, defaults to 1) —
Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to this
document to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to this
issue. Note also that this is enabled only when
slow_but_exact=True.
slow_but_exact (bool, optional, defaults to False) —
Experimental feature. Whether to use slow but exact implementation of the attention mechanism. While
merging the TP rank tensors, due to slicing operations the results may be slightly different between the
model trained on Megatron and our model. Please refer to this
issue. A solution to obtain more accurate results is to
enable this feature. Enabling it will slow down inference. This will probably be
resolved in the future once the main model has been fine-tuned with TP_rank=1.
This is the configuration class to store the configuration of a BloomModel. It is used to instantiate a Bloom
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to the Bloom architecture
bigscience/bloom.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BloomConfig, BloomModel
# Initializing a Bloom configuration
configuration = BloomConfig()
# Initializing a model (with random weights) from the configuration
model = BloomModel(configuration)
# Accessing the model configuration
configuration = model.config
BloomModel
class transformers.BloomModel
(
config: BloomConfig
)
Parameters
config (BloomConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Bloom Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**deprecated_arguments
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
Each element of past_key_values is a tuple (past_key, past_value):
past_key: [batch_size * num_heads, head_dim, kv_length]
past_value: [batch_size * num_heads, kv_length, head_dim]
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BloomConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The BloomModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BloomModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomModel.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
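The past_key_values input described above allows the cache from a first forward pass to be reused, so that only the new tokens need to be fed on the next call. A minimal sketch continuing from the example above:
import torch
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values
# feed only the new tokens, with an attention mask covering past + new positions
new_ids = tokenizer(" today", return_tensors="pt", add_special_tokens=False).input_ids
attention_mask = torch.cat([inputs.attention_mask, torch.ones_like(new_ids)], dim=-1)
with torch.no_grad():
    outputs = model(input_ids=new_ids, attention_mask=attention_mask, past_key_values=past_key_values)
last_hidden_states = outputs.last_hidden_state  # hidden states for the new tokens only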
BloomTokenizerFast
class transformers.BloomTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = '<pad>'
add_prefix_space = False
clean_up_tokenization_spaces = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <unk>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <s>) —
The beginning of sequence token.
eos_token (str, optional, defaults to </s>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The Bloom tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
Construct a “fast” Bloom tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:
from transformers import BloomTokenizerFast
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")
tokenizer("Hello world")["input_ids"]
[59414, 8876]
tokenizer(" Hello world")["input_ids"]
[86153, 8876]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
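A minimal sketch of the add_prefix_space=True variant discussed above:
from transformers import BloomTokenizerFast
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom", add_prefix_space=True)
tokenizer("Hello world")["input_ids"]  # now encoded as if the text started with a space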
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
BloomForCausalLM
class transformers.BloomForCausalLM
(
config: BloomConfig
)
Parameters
config (BloomConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**deprecated_arguments
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
Each element of past_key_values is a tuple (past_key, past_value):
past_key: [batch_size * num_heads, head_dim, kv_length]
past_value: [batch_size * num_heads, kv_length, head_dim]
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BloomConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The BloomForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BloomForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
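Building on the example above, here is a minimal, hedged sketch of reusing the returned past_key_values so that only the new tokens are fed on the next call (the continuation text " and" is illustrative):
cached = model(**inputs, use_cache=True)
next_ids = tokenizer(" and", return_tensors="pt")["input_ids"]
# Only the new token ids are passed; keys and values for the prompt are reused from the cache.
continued = model(input_ids=next_ids, past_key_values=cached.past_key_values, use_cache=True)
next_token_logits = continued.logits[:, -1, :]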
BloomForSequenceClassification
class transformers.BloomForSequenceClassification
<
source
>
(
config: BloomConfig
)
Parameters
config (BloomConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Bloom Model transformer with a sequence classification head on top (linear layer).
BloomForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
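As a hedged sketch of that behavior (the checkpoint and the explicit pad_token_id assignment are assumptions; if the tokenizer had no padding token you would also need to set one, e.g. tokenizer.pad_token = tokenizer.eos_token):
import torch
from transformers import AutoTokenizer, BloomForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m")
# Tell the model which id marks padding so it can pick the last real token of each row.
if model.config.pad_token_id is None:
... model.config.pad_token_id = tokenizer.pad_token_id
batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="pt")
with torch.no_grad():
... logits = model(**batch).logits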
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**deprecated_arguments
)
→
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
Each element of past_key_values is a tuple (past_key, past_value):
past_key: [batch_size * num_heads, head_dim, kv_length]
past_value: [batch_size * num_heads, kv_length, head_dim]
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BloomConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BloomForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, BloomForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, BloomForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BloomForSequenceClassification.from_pretrained(
... "bigscience/bloom-560m", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
BloomForTokenClassification
class transformers.BloomForTokenClassification
<
source
>
(
config: BloomConfig
)
Parameters
config (BloomConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bloom Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**deprecated_arguments
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
Each element of past_key_values is a tuple (past_key, past_value):
past_key: [batch_size * num_heads, head_dim, kv_length]
past_value: [batch_size * num_heads, kv_length, head_dim]
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BloomConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BloomForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BloomForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForTokenClassification.from_pretrained("bigscience/bloom-560m")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
BloomForQuestionAnswering
class transformers.BloomForQuestionAnswering
<
source
>
(
config
)
Parameters
config (BloomConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The BLOOM Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
Each element of past_key_values is a tuple (past_key, past_value):
past_key: [batch_size * num_heads, head_dim, kv_length]
past_value: [batch_size * num_heads, kv_length, head_dim]
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
The BloomForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
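Example (a hedged sketch only; the checkpoint and the question/context strings are illustrative assumptions, and a base checkpoint without QA fine-tuning will return an arbitrary span):
import torch
from transformers import AutoTokenizer, BloomForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForQuestionAnswering.from_pretrained("bigscience/bloom-560m")
question, context = "Where is the Eiffel Tower?", "The Eiffel Tower is located in Paris."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
# Pick the most likely start/end positions and decode the corresponding span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])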
DePlot
Overview
DePlot was proposed in the paper DePlot: One-shot visual language reasoning by plot-to-table translation by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, and Yasemin Altun.
The abstract of the paper states the following:
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
Model description
DePlot is a model trained using the Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
DePlot is a Visual Question Answering variant of the Pix2Struct architecture: it renders the input question on the image and predicts the answer.
Usage
Currently one checkpoint is available for DePlot:
google/deplot: DePlot fine-tuned on ChartQA dataset
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning
To fine-tune DePlot, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
# `model` is the Pix2StructForConditionalGeneration loaded in the usage example above
optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
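As an illustrative, hedged sketch only (the target string below is a made-up linearized table, and encoding labels through the processor's tokenizer is an assumption), a single training step with this optimizer and scheduler could look like:
labels = processor.tokenizer("TITLE | year | value <0x0A> 2020 | 42", return_tensors="pt").input_ids  # hypothetical target table
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()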
ERNIE
Overview
ERNIE is a series of powerful models proposed by Baidu that perform especially well on Chinese tasks,
including [ERNIE1.0](https://arxiv.org/abs/1904.09223), [ERNIE2.0](https://ojs.aaai.org/index.php/AAAI/article/view/6428),
[ERNIE3.0](https://arxiv.org/abs/2107.02137), [ERNIE-Gram](https://arxiv.org/abs/2010.12148), [ERNIE-health](https://arxiv.org/abs/2110.07244), etc.
These models were contributed by nghuyong, and the official code can be found in PaddleNLP (in PaddlePaddle).
How to use
Take `ernie-1.0-base-zh` as an example:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
Supported Models
| Model Name | Language | Description |
| --- | --- | --- |
| ernie-1.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-base-en | English | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-large-en | English | Layer:24, Heads:16, Hidden:1024 |
| ernie-3.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-3.0-medium-zh | Chinese | Layer:6, Heads:12, Hidden:768 |
| ernie-3.0-mini-zh | Chinese | Layer:6, Heads:12, Hidden:384 |
| ernie-3.0-micro-zh | Chinese | Layer:4, Heads:12, Hidden:384 |
| ernie-3.0-nano-zh | Chinese | Layer:4, Heads:12, Hidden:312 |
| ernie-health-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-gram-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
You can find all the supported models on Hugging Face’s model hub: huggingface.co/nghuyong, and model details in PaddlePaddle’s official
repos: PaddleNLP
and ERNIE.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
ErnieConfig
class transformers.ErnieConfig
<
source
>
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
task_type_vocab_size = 3
use_task_id = False
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the ERNIE model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling ErnieModel or TFErnieModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling ErnieModel or TFErnieModel.
task_type_vocab_size (int, optional, defaults to 3) —
The vocabulary size of the task_type_ids for the ERNIE2.0/ERNIE3.0 models.
use_task_id (bool, optional, defaults to False) —
Whether or not the model supports task_type_ids.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of an ErnieModel or a TFErnieModel. It is used to
instantiate an ERNIE model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ERNIE
nghuyong/ernie-3.0-base-zh architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import ErnieConfig, ErnieModel
# Initializing an ERNIE nghuyong/ernie-3.0-base-zh style configuration
configuration = ErnieConfig()
# Initializing a model (with random weights) from the nghuyong/ernie-3.0-base-zh style configuration
model = ErnieModel(configuration)
# Accessing the model configuration
configuration = model.config
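Since the configuration exposes use_task_id and task_type_vocab_size, here is a minimal, hedged sketch (random weights; the token and task ids are illustrative) of passing task_type_ids to the model:
import torch
from transformers import ErnieConfig, ErnieModel
config = ErnieConfig(use_task_id=True, task_type_vocab_size=3)
model = ErnieModel(config)
input_ids = torch.tensor([[101, 2769, 3221, 102]])  # illustrative token ids
task_type_ids = torch.zeros_like(input_ids)  # every token belongs to task 0
outputs = model(input_ids=input_ids, task_type_ids=task_type_ids)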
Ernie specific outputs
class transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: FloatTensor = None
seq_relationship_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of ErnieForPreTraining.
ErnieModel
class transformers.ErnieModel
<
source
>
(
config
add_pooling_layer = True
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Ernie Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
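A minimal, hedged sketch of that decoder setup (random weights; the encoder states are dummy tensors used purely to show the expected shapes):
import torch
from transformers import ErnieConfig, ErnieModel
config = ErnieConfig(is_decoder=True, add_cross_attention=True)
decoder = ErnieModel(config)
input_ids = torch.tensor([[101, 2769, 102]])
encoder_hidden_states = torch.randn(1, 5, config.hidden_size)  # stands in for a real encoder's output
outputs = decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)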
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The ErnieModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieModel
import torch
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ErnieForPreTraining
class transformers.ErnieForPreTraining
<
source
>
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
next_sentence_label: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional):
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked),
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
next_sentence_label (torch.LongTensor of shape (batch_size,), optional):
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence
pair (see input_ids docstring) Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}):
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForPreTraining.from_pretrained("nghuyong/ernie-1.0-base-zh")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
ErnieForCausalLM
class transformers.ErnieForCausalLM
<
source
>
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The ErnieForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, ErnieForCausalLM
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForCausalLM.from_pretrained("nghuyong/ernie-1.0-base-zh")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
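The past_key_values and use_cache arguments documented above are what the generation utilities use to cache attention key/value states between decoding steps. As a minimal sketch (the prompt and generation settings are illustrative, and passing is_decoder=True for standalone causal-LM use is an assumption, not taken from the official example), the same checkpoint can be used with generate():
from transformers import AutoTokenizer, ErnieForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
# is_decoder=True configures the model for standalone causal-LM generation (assumption)
model = ErnieForCausalLM.from_pretrained("nghuyong/ernie-1.0-base-zh", is_decoder=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# use_cache=True lets generate() reuse past_key_values instead of recomputing them at each step
generated_ids = model.generate(**inputs, max_new_tokens=20, use_cache=True)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))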
ErnieForMaskedLM
class transformers.ErnieForMaskedLM
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-1.0-base-zh")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.88
ErnieForNextSentencePrediction
class transformers.ErnieForNextSentencePrediction
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a next sentence prediction (classification) head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair
(see input_ids docstring). Indices should be in [0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
Returns
transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss.
logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieForNextSentencePrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieForNextSentencePrediction
import torch
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForNextSentencePrediction.from_pretrained("nghuyong/ernie-1.0-base-zh")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
outputs = model(**encoding, labels=torch.LongTensor([1]))
logits = outputs.logits
assert logits[0, 0] < logits[0, 1] # next sentence was random
ErnieForSequenceClassification
class transformers.ErnieForSequenceClassification
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The ErnieForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
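No usage example is included above for this head, so here is a minimal sketch. The sequence classification head on top of the pretrained checkpoint is newly initialized, and num_labels is an illustrative assumption:
from transformers import AutoTokenizer, ErnieForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
# the classification head is randomly initialized and needs fine-tuning; num_labels is illustrative
model = ErnieForSequenceClassification.from_pretrained("nghuyong/ernie-1.0-base-zh", num_labels=2)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1])  # class index in [0, ..., config.num_labels - 1]
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits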
ErnieForMultipleChoice
class transformers.ErnieForMultipleChoice
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ErnieConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ErnieForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ErnieForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = ErnieForMultipleChoice.from_pretrained("nghuyong/ernie-1.0-base-zh")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
ErnieForTokenClassification
class transformers.ErnieForTokenClassification
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
The ErnieForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
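As with the sequence classification head, no example is given above, so here is a minimal sketch; the token classification head is newly initialized and num_labels is an illustrative assumption:
from transformers import AutoTokenizer, ErnieForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
# the token classification head is randomly initialized; num_labels is illustrative
model = ErnieForTokenClassification.from_pretrained("nghuyong/ernie-1.0-base-zh", num_labels=5)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(dim=-1)  # one label id per input token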
ErnieForQuestionAnswering
class transformers.ErnieForQuestionAnswering
(
config
)
Parameters
config (ErnieConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Ernie Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
task_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Task type embedding is a special embedding to represent the characteristic of different tasks, such as
word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We
assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
The ErnieForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
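A minimal sketch of extractive question answering with this head; the question/context pair is illustrative, and since the span classification head is newly initialized, the decoded span is not meaningful before fine-tuning:
from transformers import AutoTokenizer, ErnieForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
# the span classification head is randomly initialized and needs fine-tuning on a QA dataset
model = ErnieForQuestionAnswering.from_pretrained("nghuyong/ernie-1.0-base-zh")

question, context = "Who is cute?", "My dog is cute."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# take the most likely start/end positions and decode the corresponding span
start_index = outputs.start_logits.argmax()
end_index = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs.input_ids[0, start_index : end_index + 1])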
Encoder Decoder Models
Overview
The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with any
pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks
was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by
Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
After such an EncoderDecoderModel has been trained/fine-tuned, it can be saved/loaded just like
any other model (see the examples for more information).
An application of this architecture could be to leverage two pretrained BertModel as the encoder
and decoder for a summarization model as was shown in: Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata.
Randomly initializing EncoderDecoderModel from model configurations.
EncoderDecoderModel can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default BertModel configuration for the encoder and the default BertForCausalLM configuration for the decoder.
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
config_encoder = BertConfig()
config_decoder = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = EncoderDecoderModel(config=config)
Initializing EncoderDecoderModel from a pretrained encoder and a pretrained decoder.
EncoderDecoderModel can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model (e.g. BERT) can serve as the encoder, while the decoder can be a pretrained auto-encoding model (e.g. BERT), a pretrained causal language model (e.g. GPT-2), or the pretrained decoder part of a sequence-to-sequence model (e.g. the decoder of BART).
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized (an illustrative pairing is sketched after the example below).
Initializing EncoderDecoderModel from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the EncoderDecoderModel class provides an EncoderDecoderModel.from_encoder_decoder_pretrained() method.
from transformers import EncoderDecoderModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
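The encoder and decoder checkpoints do not have to come from the same architecture. As an illustrative (not official) pairing, a BERT encoder can be combined with a GPT-2 decoder; since GPT-2 has no pretrained cross-attention weights, those layers are randomly initialized and require fine-tuning:
from transformers import EncoderDecoderModel

# illustrative pairing: BERT as encoder, GPT-2 as decoder; the cross-attention layers of the
# decoder are randomly initialized and must be fine-tuned on a downstream task
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")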
Loading an existing EncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the EncoderDecoderModel class, EncoderDecoderModel provides the from_pretrained(...) method just like any other model architecture in Transformers.
To perform inference, one uses the generate method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy decoding, beam search and multinomial sampling (an additional sketch follows the example below).
from transformers import AutoTokenizer, EncoderDecoderModel
# load a fine-tuned seq2seq model and corresponding tokenizer
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
# let's perform inference on a long piece of text
ARTICLE_TO_SUMMARIZE = (
... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
... )
input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids
# autoregressively generate summary (uses greedy decoding by default)
generated_ids = model.generate(input_ids)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow.
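Beyond the default greedy decoding used above, generate accepts standard decoding arguments. The following is a sketch with illustrative settings for beam search and multinomial sampling on the same fine-tuned checkpoint:
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
input_ids = tokenizer(
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.",
    return_tensors="pt",
).input_ids

# beam search with 4 beams (illustrative settings)
beam_ids = model.generate(input_ids, num_beams=4, early_stopping=True, max_new_tokens=60)
# multinomial sampling (illustrative settings)
sample_ids = model.generate(input_ids, do_sample=True, top_k=50, max_new_tokens=60)
print(tokenizer.batch_decode(beam_ids, skip_special_tokens=True)[0])
print(tokenizer.batch_decode(sample_ids, skip_special_tokens=True)[0])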
Loading a PyTorch checkpoint into TFEncoderDecoderModel.
TFEncoderDecoderModel.from_pretrained() currently doesn’t support initializing the model from a
PyTorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only PyTorch
checkpoints for a particular encoder-decoder model, a workaround is:
# a workaround to load from pytorch checkpoint
from transformers import EncoderDecoderModel, TFEncoderDecoderModel
_model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
_model.encoder.save_pretrained("./encoder")
_model.decoder.save_pretrained("./decoder")
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
... )
# This is only for copying some specific attributes of this particular model.
model.config = _model.config
Training
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model.
As you can see, only two inputs are required for the model in order to compute a loss: input_ids (which are the
input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded
target sequence).
from transformers import BertTokenizer, EncoderDecoderModel
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
input_ids = tokenizer(
... "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.",
... return_tensors="pt",
... ).input_ids
labels = tokenizer(
... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
... return_tensors="pt",
... ).input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
Detailed colab for training.
This model was contributed by thomwolf. This model’s TensorFlow and Flax versions
were contributed by ydshieh.
EncoderDecoderConfig
class transformers.EncoderDecoderConfig
(
**kwargs
)
Parameters
kwargs (optional) —
Dictionary of keyword arguments. Notably:
encoder (PretrainedConfig, optional) — An instance of a configuration object that defines
the encoder config.
decoder (PretrainedConfig, optional) — An instance of a configuration object that defines
the decoder config.
EncoderDecoderConfig is the configuration class to store the configuration of an EncoderDecoderModel. It is
used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder
configs.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
# Initializing a BERT bert-base-uncased style configuration
config_encoder = BertConfig()
config_decoder = BertConfig()
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
# Initializing a Bert2Bert model from the bert-base-uncased style configurations
model = EncoderDecoderModel(config=config)
# Accessing the model configuration
config_encoder = model.config.encoder
config_decoder = model.config.decoder
# set decoder config to causal lm
config_decoder.is_decoder = True
config_decoder.add_cross_attention = True
# Saving the model, including its configuration
model.save_pretrained("my-model")
# loading model and config from pretrained folder
encoder_decoder_config = EncoderDecoderConfig.from_pretrained("my-model")
model = EncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
from_encoder_decoder_configs
(
encoder_config: PretrainedConfig
decoder_config: PretrainedConfig
**kwargs
)
→
EncoderDecoderConfig
Returns
EncoderDecoderConfig
An instance of a configuration object
Instantiate an EncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and
decoder model configuration.
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict() from PretrainedConfig.
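A brief sketch of inspecting the serialized dictionary; the nested encoder/decoder entries and the specific keys shown are assumptions based on the BERT-based example above, not taken from the official docs:
from transformers import BertConfig, EncoderDecoderConfig

config = EncoderDecoderConfig.from_encoder_decoder_configs(BertConfig(), BertConfig())
config_dict = config.to_dict()
# the encoder and decoder configurations are serialized as nested dictionaries
print(config_dict["model_type"])              # expected: "encoder-decoder"
print(config_dict["encoder"]["hidden_size"])  # e.g. 768 for the default BertConfig
print(config_dict["decoder"]["is_decoder"])   # set by from_encoder_decoder_configs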
EncoderDecoderModel
class transformers.EncoderDecoderModel
(
config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None
encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
decoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
)
Parameters
config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the
from_pretrained() function and the decoder is loaded via the from_pretrained()
function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model
(see the examples for more information).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
EncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one
of the base model classes of the library as encoder and another one as decoder when created with the
AutoModel.from_pretrained() class method for the encoder and the
AutoModelForCausalLM.from_pretrained() class method for the decoder.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
For training, decoder_input_ids are automatically created by the model by shifting the labels to the
right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id (a minimal sketch of this shift follows the example below).
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(torch.FloatTensor), optional) —
This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) is a tensor
of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the
decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. This is useful if you want more control over how to convert decoder_input_ids indices
into associated vectors than the model’s internal embedding lookup matrix.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple.
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:
Without a prefix which will be input as **encoder_kwargs for the encoder forward function.
With a decoder_ prefix which will be input as **decoder_kwargs for the decoder forward function.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EncoderDecoderConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The EncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import EncoderDecoderModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)  # initialize Bert2Bert from pre-trained checkpoints
# training
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
input_ids = tokenizer("This is a really long text", return_tensors="pt").input_ids
labels = tokenizer("This is the corresponding summary", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss, logits = outputs.loss, outputs.logits
# save and load from pretrained
model.save_pretrained("bert2bert")
model = EncoderDecoderModel.from_pretrained("bert2bert")
# generation
generated = model.generate(input_ids)
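As noted in the decoder_input_ids description above, during training the model derives decoder_input_ids from labels by shifting them to the right. A minimal sketch of that shift, written here only for illustration (it is not the library's internal helper):
import torch

def shift_right(labels, pad_token_id, decoder_start_token_id):
    # Shift labels one position to the right and prepend the decoder start token
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # Positions marked -100 (ignored by the loss) are replaced with the pad token
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted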
from_encoder_decoder_pretrained
(
encoder_pretrained_model_name_or_path: str = None
decoder_pretrained_model_name_or_path: str = None
*model_args
**kwargs
)
Parameters
encoder_pretrained_model_name_or_path (str, optional) —
Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In
this case, from_tf should be set to True and a configuration object should be provided as
config argument. This loading path is slower than converting the TensorFlow checkpoint in a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
decoder_pretrained_model_name_or_path (str, optional, defaults to None) —
Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In
this case, from_tf should be set to True and a configuration object should be provided as
config argument. This loading path is slower than converting the TensorFlow checkpoint in a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (remaining positional arguments, optional) —
All remaining positional arguments will be passed to the underlying model’s __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g.,
output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
A short sketch of this prefix convention follows the example below.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train
the model, you need to first set it back in training mode with model.train().
Example:
from transformers import EncoderDecoderModel
# initialize a bert2bert from two pretrained BERT models. Note that the cross-attention layers will be randomly initialized
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
# saving model after fine-tuning
model.save_pretrained("./bert2bert")
# load fine-tuned model
model = EncoderDecoderModel.from_pretrained("./bert2bert")
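The encoder_/decoder_ kwargs prefixes described above can also override configuration values at load time. A short sketch assuming two BERT checkpoints (hidden_dropout_prob and attention_probs_dropout_prob are standard BERT configuration fields; the values are arbitrary):
from transformers import EncoderDecoderModel

# encoder_-prefixed kwargs update the encoder config, decoder_-prefixed ones the decoder config
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased",
    "bert-base-uncased",
    encoder_hidden_dropout_prob=0.2,
    decoder_attention_probs_dropout_prob=0.2,
)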
TFEncoderDecoderModel
class transformers.TFEncoderDecoderModel
(
*args
**kwargs
)
Parameters
config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the
from_pretrained() function and the decoder is loaded via the from_pretrained()
function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models
(see the examples for more information).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TFEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one
of the base model classes of the library as encoder and another one as decoder when created with the
from_pretrained() class method for the encoder and from_pretrained() class
method for the decoder.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Tuple[Tuple[tf.Tensor]] | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
Provide these for sequence-to-sequence training of the decoder. Indices can be obtained using
PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for
details.
decoder_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tf.Tensor), optional) —
This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output
of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(tf.Tensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. This is useful if you want more control over how to convert decoder_input_ids indices
into associated vectors than the model’s internal embedding lookup matrix.
labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:
Without a prefix which will be input as **encoder_kwargs for the encoder forward function.
With a decoder_ prefix which will be input as **decoder_kwargs for the decoder forward function.
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EncoderDecoderConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import TFEncoderDecoderModel, BertTokenizer
# initialize a bert2gpt2 from a pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# forward
input_ids = tokenizer.encode(
    "Hello, my dog is cute", add_special_tokens=True, return_tensors="tf"
)  # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
# training
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
loss, logits = outputs.loss, outputs.logits
# save and load from pretrained
model.save_pretrained("bert2gpt2")
model = TFEncoderDecoderModel.from_pretrained("bert2gpt2")
# generation
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.bos_token_id)
from_encoder_decoder_pretrained
(
encoder_pretrained_model_name_or_path: str = None
decoder_pretrained_model_name_or_path: str = None
*model_args
**kwargs
)
Parameters
encoder_pretrained_model_name_or_path (str, optional) —
Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch checkpoint (e.g., ./pt_model/). In this case,
encoder_from_pt should be set to True.
decoder_pretrained_model_name_or_path (str, optional, defaults to None) —
Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch checkpoint (e.g., ./pt_model/). In this case,
decoder_from_pt should be set to True (a short sketch follows the example below).
model_args (remaining positional arguments, optional) —
All remaining positional arguments will be passed to the underlying model’s __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g.,
output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
Example:
from transformers import TFEncoderDecoderModel
# initialize a bert2gpt2 from two pretrained BERT models. Note that the cross-attention layers will be randomly initialized
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
# saving model after fine-tuning
model.save_pretrained("./bert2gpt2")
# load fine-tuned model
model = TFEncoderDecoderModel.from_pretrained("./bert2gpt2")
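As described in the parameter list above, PyTorch checkpoints can be loaded on either side by setting encoder_from_pt / decoder_from_pt. A hedged sketch, assuming hypothetical local directories ./my_pt_bert and ./my_pt_gpt2 that hold PyTorch weights saved with save_pretrained():
from transformers import TFEncoderDecoderModel

# Hypothetical local PyTorch checkpoints; the *_from_pt flags trigger the PyTorch-to-TF conversion
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
    "./my_pt_bert", "./my_pt_gpt2", encoder_from_pt=True, decoder_from_pt=True
)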
FlaxEncoderDecoderModel
class transformers.FlaxEncoderDecoderModel
(
config: EncoderDecoderConfig
input_shape: typing.Optional[typing.Tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype (a short sketch follows the generation example further below).
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the
from_pretrained() function and the decoder is loaded via the from_pretrained()
function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
generative task, like summarization.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models
(see the examples for more information).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
FlaxEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with
the module (flax.nn.Module) of one of the base model classes of the library as encoder module and another one as
decoder module when created with the FlaxAutoModel.from_pretrained() class method for the
encoder and the FlaxAutoModelForCausalLM.from_pretrained() class method for the decoder.
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For sequence to sequence training, decoder_input_ids should be provided. decoder_input_ids should be
created outside of the model by shifting the labels to the right, replacing -100 by the pad_token_id
and prepending them with the decoder_start_token_id.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.encoder.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence token in the position embeddings. Selected in the
range [0, config.decoder.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
If set to True, the model will return a ~utils.FlaxSeq2SeqLMOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EncoderDecoderConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import FlaxEncoderDecoderModel, BertTokenizer, GPT2Tokenizer
# load a fine-tuned bert2gpt2 model
model = FlaxEncoderDecoderModel.from_pretrained("patrickvonplaten/bert2gpt2-cnn_dailymail-fp16")
# load input & output tokenizer
tokenizer_input = BertTokenizer.from_pretrained("bert-base-cased")
tokenizer_output = GPT2Tokenizer.from_pretrained("gpt2")
article = '''Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members
singing a racist chant. SAE's national chapter suspended the students,
but University of Oklahoma President David Boren took it a step further,
saying the university's affiliation with the fraternity is permanently done.'''
input_ids = tokenizer_input(article, add_special_tokens=True, return_tensors="np").input_ids
# use GPT2's eos_token as the pad as well as eos token
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.pad_token_id = model.config.eos_token_id
sequences = model.generate(input_ids, num_beams=4, max_length=12).sequences
summary = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)[0]
assert summary == "SAS Alpha Epsilon suspended Sigma Alpha Epsilon members"
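The dtype argument described in the class parameters above only affects the computation, not the stored parameters. A short sketch of half-precision inference (bfloat16 is typically used on TPUs):
import jax.numpy as jnp
from transformers import FlaxEncoderDecoderModel

# Run the forward pass in bfloat16; the checkpoint's parameter dtype is left unchanged
model = FlaxEncoderDecoderModel.from_pretrained(
    "patrickvonplaten/bert2gpt2-cnn_dailymail-fp16", dtype=jnp.bfloat16
)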
from_encoder_decoder_pretrained
(
encoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None
decoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None
*model_args
**kwargs
)
Parameters
encoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional) —
Information necessary to initiate the encoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
decoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional, defaults to None) —
Information necessary to initiate the decoder. Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a
user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using
save_pretrained(), e.g., ./my_model_directory/.
model_args (remaining positional arguments, optional) —
All remaining positional arguments will be passed to the underlying model’s __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g.,
output_attentions=True).
To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
checkpoints.
Example:
from transformers import FlaxEncoderDecoderModel
# initialize a bert2gpt2 from pretrained BERT and GPT2 models. Note that the cross-attention layers will be randomly initialized
model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
# saving model after fine-tuning
model.save_pretrained("./bert2gpt2")
# load fine-tuned model
model = FlaxEncoderDecoderModel.from_pretrained("./bert2gpt2")
BART
DISCLAIMER: If you see something strange, file a GitHub issue and assign
@patrickvonplaten
Overview
The Bart model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,
Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019.
According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a
left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme,
where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new
state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains
of up to 6 ROUGE.
Tips:
BART is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
Sequence-to-sequence model with an encoder and a decoder. The encoder is fed a corrupted version of the tokens, and the decoder is fed the original tokens (but has a mask to hide future words, like a regular transformers decoder). A composition of the following transformations is applied as the pretraining task for the encoder:
mask random tokens (like in BERT)
delete random tokens
mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
permute sentences
rotate the document to make it start at a specific token
This model was contributed by sshleifer. The Authors’ code can be found here.
Examples
Examples and scripts for fine-tuning BART and other models for sequence to sequence tasks can be found in
examples/pytorch/summarization/.
An example of how to train BartForConditionalGeneration with a Hugging Face datasets
object can be found in this forum discussion.
Distilled checkpoints are described in this paper.
Implementation Notes
Bart doesn’t use token_type_ids for sequence classification. Use BartTokenizer or
encode() to get the proper splitting.
The forward pass of BartModel will create the decoder_input_ids if they are not passed.
This is different than some other modeling APIs. A typical use case of this feature is mask filling.
Model predictions are intended to be identical to the original implementation when
forced_bos_token_id=0. This only works, however, if the string you pass to
fairseq.encode starts with a space.
generate() should be used for conditional generation tasks like
summarization; see the example in its docstring and the short sketch after this list.
Models that load the facebook/bart-large-cnn weights will not have a mask_token_id, or be able to perform
mask-filling tasks.
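A short sketch of conditional generation with generate(), as mentioned in the list above (the input article and the generation hyperparameters are illustrative only):
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

article = "PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions."
inputs = tokenizer(article, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])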
Mask Filling
The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == [
"UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BART. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Summarization
A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker.
A notebook on how to finetune BART for summarization with fastai using blurr. 🌎
A notebook on how to finetune BART for summarization in two languages with Trainer class. 🌎
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
FlaxBartForConditionalGeneration is supported by this example script.
Summarization chapter of the 🤗 Hugging Face course.
Summarization task guide
Fill-Mask
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
FlaxBartForConditionalGeneration is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Translation
A notebook on how to finetune mBART using Seq2SeqTrainer for Hindi to English translation. 🌎
BartForConditionalGeneration is supported by this example script and notebook.
TFBartForConditionalGeneration is supported by this example script and notebook.
Translation task guide
See also:
Text classification task guide
Question answering task guide
Causal language modeling task guide
BartConfig
class transformers.BartConfig
(
vocab_size = 50265
max_position_embeddings = 1024
encoder_layers = 12
encoder_ffn_dim = 4096
encoder_attention_heads = 16
decoder_layers = 12
decoder_ffn_dim = 4096
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
activation_function = 'gelu'
d_model = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
classifier_dropout = 0.0
scale_embedding = False
use_cache = True
num_labels = 3
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
is_encoder_decoder = True
decoder_start_token_id = 2
forced_eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the BART model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BartModel or TFBartModel.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in decoder.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) —
The dropout ratio for classifier.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
num_labels (int, optional, defaults to 3) —
The number of labels to use in BartForSequenceClassification.
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a BartModel. It is used to instantiate a BART
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the BART
facebook/bart-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BartConfig, BartModel
# Initializing a BART facebook/bart-large style configuration
configuration = BartConfig()
# Initializing a model (with random weights) from the facebook/bart-large style configuration
model = BartModel(configuration)
# Accessing the model configuration
configuration = model.config
BartTokenizer
class transformers.BartTokenizer
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The BART tokenizer detects the beginning of words by the preceding space.)
Constructs a BART tokenizer, which is similar to the RoBERTa tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
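A minimal sketch of the pre-tokenized case just described (the word list is arbitrary):
from transformers import BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
# Each word receives a prefix space, including the first one
encoding = tok(["Hello", "world"], is_split_into_words=True)
print(encoding["input_ids"])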
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BART sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
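A small sketch of this format using the public tokenizer API (the token contents are arbitrary):
from transformers import BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
ids_a = tok.convert_tokens_to_ids(tok.tokenize("Hello"))
ids_b = tok.convert_tokens_to_ids(tok.tokenize(" world"))
single = tok.build_inputs_with_special_tokens(ids_a)        # <s> A </s>
pair = tok.build_inputs_with_special_tokens(ids_a, ids_b)   # <s> A </s></s> B </s>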
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not
make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
BartTokenizerFast
class transformers.BartTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The BART tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” BART tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer,
using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word
will be encoded differently depending on whether it is at the beginning of the sentence (without a preceding
space) or not:
from transformers import BartTokenizerFast
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
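A brief hedged sketch of the behaviour just described (ids are not spelled out, as they depend on the checkpoint vocabulary): instantiating with add_prefix_space=True makes pre-tokenized input work and gives the leading word the same treatment as any other word.
from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base", add_prefix_space=True)
# Pre-tokenized input requires add_prefix_space=True, as noted above.
enc = tokenizer(["Hello", "world"], is_split_into_words=True)
enc["input_ids"]
# with the prefix space, "Hello" is encoded the same way as " Hello" in the example above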
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not
make use of token type ids, therefore a list of zeros is returned.
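Minimal sketch of this behaviour (sentences are illustrative): whether one or two sequences are passed, the returned mask contains only zeros, one per token of the final <s> A </s></s> B </s> sequence.
from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
ids_a = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("How are you?", add_special_tokens=False)["input_ids"]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# a list of zeros covering both sequences plus the special tokens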
BartModel
class transformers.BartModel
(
config: BartConfig
)
Parameters
config (BartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare BART Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BartModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BartModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
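Continuing the example above, a hedged sketch of requesting the optional outputs documented in the returns section (shapes assume facebook/bart-base, which has a hidden size of 768):
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
outputs.last_hidden_state.shape       # (batch_size, sequence_length, 768)
len(outputs.encoder_hidden_states)    # number of encoder layers + 1 (embedding output)
outputs.cross_attentions[0].shape     # (batch_size, num_heads, target_length, source_length)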
BartForConditionalGeneration
class transformers.BartForConditionalGeneration
(
config: BartConfig
)
Parameters
config (BartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The BART Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BartForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
ARTICLE_TO_SUMMARIZE = (
... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
... )
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, max_length=20)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions'
Mask filling example:
from transformers import AutoTokenizer, BartForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
TXT = "My friends are <mask> but they eat too many carbs."
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
tokenizer.decode(predictions).split()
['not', 'good', 'healthy', 'great', 'very']
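Training sketch (an illustrative, hedged example rather than part of the original reference): passing labels makes the model compute the language modeling loss documented above, and the decoder inputs are created internally by shifting the labels to the right.
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")
labels = tokenizer("UN Chief says there is no military solution", return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # the Seq2SeqLMOutput loss described above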
BartForSequenceClassification
class transformers.BartForSequenceClassification
(
config: BartConfig
**kwargs
)
Parameters
config (BartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for
GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when label is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BartForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, BartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-sst2")
model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'POSITIVE'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.0
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, BartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-sst2")
model = BartForSequenceClassification.from_pretrained("valhalla/bart-large-sst2", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = BartForSequenceClassification.from_pretrained(
... "valhalla/bart-large-sst2", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
BartForQuestionAnswering
class transformers.BartForQuestionAnswering
(
config
)
Parameters
config (BartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: Tensor = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should read modeling_bart._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BartForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BartForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("valhalla/bart-large-finetuned-squadv1")
model = BartForQuestionAnswering.from_pretrained("valhalla/bart-large-finetuned-squadv1")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
0.59
BartForCausalLM
class transformers.BartForCausalLM
(
config
)
Parameters
config (BartConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
BART decoder with a language modeling head on top (a linear layer with weights tied to the input embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
heads, etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general
usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, BartForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForCausalLM.from_pretrained("facebook/bart-base", add_cross_attention=False)
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
list(logits.shape) == expected_shape
True
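Because the parameters above emphasize use_cache and past_key_values for faster sequential decoding, here is a short, hedged sketch continuing the example: generate() manages the cache internally when use_cache=True (the specific generation settings below are illustrative only).
# continuing the example above; generate() reuses past_key_values internally when use_cache=True
generated_ids = model.generate(inputs.input_ids, max_new_tokens=10, use_cache=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))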
TFBartModel
class transformers.TFBartModel
(
*args
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BART Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
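A minimal sketch of the three equivalent call styles with the TFBartModel used in the example further below (only the documented input_ids and attention_mask inputs are shown here):
from transformers import AutoTokenizer, TFBartModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = TFBartModel.from_pretrained("facebook/bart-large")
batch = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1) keyword arguments, as with PyTorch models
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
# 2) a list in the first positional argument, in the order given in the docstring
outputs = model([batch["input_ids"], batch["attention_mask"]])
# 3) a dictionary keyed by the input names given in the docstring
outputs = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})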
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A mask that ignores pad tokens will be created by default. Setting this manually is not recommended for most
use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BartConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBartModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBartModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = TFBartModel.from_pretrained("facebook/bart-large")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFBartForConditionalGeneration
class transformers.TFBartForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The BART Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A mask that ignores pad tokens will be created by default. Setting this manually is not recommended for most
use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BartConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBartForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, TFBartForConditionalGeneration
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="tf")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
Mask filling example:
from transformers import AutoTokenizer, TFBartForConditionalGeneration
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
TXT = "My friends are <mask> but they eat too many carbs."
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-large")
input_ids = tokenizer([TXT], return_tensors="tf")["input_ids"]
logits = model(input_ids).logits
probs = tf.nn.softmax(logits[0])
# probs[5] is associated with the mask token
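The published snippet stops after computing probs; a hypothetical continuation that inspects the top-5 candidate tokens at the masked position (index 5, per the comment above) could look like this:
values, predictions = tf.math.top_k(probs[5], k=5)
print(tokenizer.decode(predictions.numpy()).split())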
TFBartForSequenceClassification
class transformers.TFBartForSequenceClassification
(
*args
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
decoder_position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
decoder_head_mask: np.ndarray | tf.Tensor | None = None
cross_attn_head_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Bart uses the eos_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A mask that ignores pad tokens will be created by default. Setting this manually is not recommended for most
use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BartConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBartForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
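This section has no usage example, so the following is a minimal sketch only. It reuses the facebook/bart-large checkpoint from the examples above, which means the classification head is freshly initialized and the predicted label is not meaningful; a checkpoint fine-tuned for sequence classification would be needed for real predictions.
import tensorflow as tf
from transformers import AutoTokenizer, TFBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = TFBartForSequenceClassification.from_pretrained("facebook/bart-large", num_labels=2)  # classification head is randomly initialized
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])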
FlaxBartModel
class transformers.FlaxBartModel
(
config: BartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare Bart Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
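As a sketch of what the JIT support above means in practice (using the facebook/bart-base checkpoint from the example further below), the forward pass can be wrapped in jax.jit; the first call compiles, and later calls with the same input shapes reuse the compiled function:
import jax
from transformers import AutoTokenizer, FlaxBartModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartModel.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
@jax.jit
def encode_forward(input_ids, attention_mask):
    # the model parameters are closed over as constants; only the input arrays are traced
    return model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
last_hidden_states = encode_forward(inputs["input_ids"], inputs["attention_mask"])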
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBartPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBartModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartModel.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
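The past_key_values argument described above is initialized with init_cache; the following is a hedged sketch of how a cache could be threaded through decode when continuing the example. The init_cache(batch_size, max_length, encoder_outputs) call and the past_key_values field on the output are assumptions based on the parameter description above.
# assumed signature: init_cache(batch_size, max_length, encoder_outputs)
past_key_values = model.init_cache(decoder_input_ids.shape[0], 64, encoder_outputs)
outputs = model.decode(
    decoder_input_ids,
    encoder_outputs,
    past_key_values=past_key_values,
    return_dict=True,
)
past_key_values = outputs.past_key_values  # updated cache for the next decoding step (assumption)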
FlaxBartForConditionalGeneration
class transformers.FlaxBartForConditionalGeneration
(
config: BartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The BART Model with a language modeling head. Can be used for summarization.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
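As a minimal sketch (not part of the reference itself), the snippet below combines the dtype argument and the JIT support listed above: it loads the checkpoint used elsewhere on this page with bfloat16 computation, casts the parameters with to_bf16(), and wraps the forward pass in jax.jit. Treat it as an illustration under those assumptions rather than canonical usage.
import jax
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
# dtype only controls the computation dtype; to_bf16() casts the parameters themselves
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn", dtype=jnp.bfloat16)
model.params = model.to_bf16(model.params)
inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="jax")
# the call is a pure function of (params, inputs), so it can be JIT-compiled
@jax.jit
def forward(params, input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask, params=params).logits
logits = forward(model.params, inputs.input_ids, inputs.attention_mask)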
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"]).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
Mask filling example:
import jax
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
TXT = "My friends are <mask> but they eat too many carbs."
input_ids = tokenizer([TXT], return_tensors="jax")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
probs = jax.nn.softmax(logits[0, masked_index], axis=0)
values, predictions = jax.lax.top_k(probs, k=1)
tokenizer.decode(predictions).split()
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key and value
states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
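The past_key_values documented above can also be driven by hand. The loop below is a hedged sketch of greedy decoding with the cache: it relies on init_cache() (the method the past_key_values description refers to) and feeds one token per step together with its decoder_position_ids. In practice model.generate() wraps this loop for you, so treat the snippet only as an illustration of the decode()/past_key_values interplay.
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="jax")
encoder_outputs = model.encode(**inputs)
batch_size = inputs.input_ids.shape[0]
max_length = 8  # short horizon, no EOS handling: this is only a sketch
# pre-allocate the key/value cache that decode() will fill step by step
past_key_values = model.init_cache(batch_size, max_length, encoder_outputs)
decoder_input_ids = jnp.full((batch_size, 1), model.config.decoder_start_token_id, dtype="i4")
generated = []
for step in range(max_length):
    outputs = model.decode(
        decoder_input_ids,
        encoder_outputs,
        decoder_position_ids=jnp.full((batch_size, 1), step, dtype="i4"),
        past_key_values=past_key_values,
    )
    past_key_values = outputs.past_key_values  # updated cache for the next step
    next_token = jnp.argmax(outputs.logits[:, -1, :], axis=-1)
    generated.append(next_token)
    decoder_input_ids = next_token[:, None].astype("i4")
print(tokenizer.batch_decode(jnp.stack(generated, axis=1), skip_special_tokens=True))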
FlaxBartForSequenceClassification
class transformers.FlaxBartForSequenceClassification
(
config: BartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for GLUE
tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBartForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForSequenceClassification.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
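As a small follow-up sketch (not part of the reference), the predicted label can be read off the logits with an argmax. Note that facebook/bart-base ships no fine-tuned classification head, so the prediction and the default id2label names are only meaningful after fine-tuning.
import jax.numpy as jnp
predicted_class_id = int(jnp.argmax(outputs.logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])  # placeholder labels unless the head was fine-tuned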
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
FlaxBartForQuestionAnswering
class transformers.FlaxBartForQuestionAnswering
(
config: BartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BART Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBartPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBartForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForQuestionAnswering.from_pretrained("facebook/bart-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
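As a follow-up sketch (not part of the reference), the answer span can be recovered by taking the argmax of the start and end scores and decoding the corresponding input tokens. With the untuned QA head of facebook/bart-base the extracted span is essentially arbitrary until the model is fine-tuned.
import jax.numpy as jnp
start_index = int(jnp.argmax(start_scores, axis=-1)[0])
end_index = int(jnp.argmax(end_scores, axis=-1)[0])
# no check that end_index >= start_index; a real pipeline would handle that
answer_tokens = inputs["input_ids"][0, start_index : end_index + 1]
print(tokenizer.decode(answer_tokens, skip_special_tokens=True))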
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→ transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
FlaxBartForCausalLM
class transformers.FlaxBartForCausalLM
(
config: BartConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BartConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Bart Decoder Model with a language modeling head on top (linear layer with weights tied to the input embeddings),
e.g. for autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
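Since this entry has no standalone example, here is a minimal usage sketch under the assumption that loading the decoder-only head from the full facebook/bart-base checkpoint is acceptable for illustration (the unused encoder weights will trigger a loading warning):
from transformers import AutoTokenizer, FlaxBartForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
# scores for the token that would follow the input sequence
next_token_logits = outputs.logits[:, -1]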
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
encoder_hidden_states: typing.Optional[jax.Array] = None
encoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BartConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxBartDecoderPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBartForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
GPTBigCode
Overview
The GPTBigCode model was proposed in SantaCoder: don’t reach for the stars! by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
The abstract from the paper is the following:
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at this https URL.
The model is an optimized GPT2 model with support for Multi-Query Attention.
Technical details
The main differences compared to GPT2 are the following (a short shape sketch after this list illustrates the Multi-Query Attention change):
Added support for Multi-Query Attention.
Use gelu_pytorch_tanh instead of classic gelu.
Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn’t in the reference codebase).
Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
Merge _attn and _upcast_and_reordered_attn. Always merge the matmul with scaling. Rename reorder_and_upcast_attn -> attention_softmax_in_fp32.
Cache the attention mask value to avoid recreating it every time.
Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
Merge the key and value caches into one (note that this changes the format of layer_past/present and may cause compatibility issues for code expecting the previous format).
Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA. (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original gpt2 model).
You can read more about the optimizations in the original pull request.
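As a rough illustration of the Multi-Query Attention change above (a standalone sketch, not the library's implementation), the key/value projections collapse from one per head to a single head shared by all query heads:
import torch

batch, seq, n_head, head_dim = 2, 16, 12, 64
hidden = n_head * head_dim
x = torch.randn(batch, seq, hidden)

# Multi-Head Attention: one key projection per head -> (batch, n_head, seq, head_dim)
k_mha = torch.nn.Linear(hidden, n_head * head_dim)(x).view(batch, seq, n_head, head_dim).transpose(1, 2)

# Multi-Query Attention: a single key head shared by all query heads -> (batch, 1, seq, head_dim)
k_mqa = torch.nn.Linear(hidden, head_dim)(x).unsqueeze(1)

print(k_mha.shape)  # torch.Size([2, 12, 16, 64])
print(k_mqa.shape)  # torch.Size([2, 1, 16, 64])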
GPTBigCodeConfig
class transformers.GPTBigCodeConfig
(
vocab_size = 50257
n_positions = 1024
n_embd = 768
n_layer = 12
n_head = 12
n_inner = None
activation_function = 'gelu_pytorch_tanh'
resid_pdrop = 0.1
embd_pdrop = 0.1
attn_pdrop = 0.1
layer_norm_epsilon = 1e-05
initializer_range = 0.02
scale_attn_weights = True
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
attention_softmax_in_fp32 = True
scale_attention_softmax_in_fp32 = True
multi_query = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50257) —
Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling GPTBigCodeModel.
n_positions (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 768) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (int, optional, defaults to None) —
Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd
activation_function (str, optional, defaults to "gelu_pytorch_tanh") —
Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new", "gelu_pytorch_tanh"].
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (bool, optional, defaults to True) —
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
attention_softmax_in_fp32 (bool, optional, defaults to True) —
Whether to call the fused softmax in float32.
scale_attention_softmax_in_fp32 (bool, optional, defaults to True) —
Whether to scale the attention softmax in float32.
multi_query (bool, optional, defaults to True) —
Whether to use Multi-Query Attention (True) or Multi-Head Attention (False). A short configuration sketch follows the example below.
This is the configuration class to store the configuration of a GPTBigCodeModel. It is used to instantiate a
GPTBigCode model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTBigCode
gpt_bigcode architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import GPTBigCodeConfig, GPTBigCodeModel
# Initializing a GPTBigCode configuration
configuration = GPTBigCodeConfig()
# Initializing a model (with random weights) from the configuration
model = GPTBigCodeModel(configuration)
# Accessing the model configuration
configuration = model.config
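As a hedged sketch of the multi_query option described above (the sizes below are arbitrary), you can also instantiate a smaller, randomly initialized model that falls back to classic Multi-Head Attention:
from transformers import GPTBigCodeConfig, GPTBigCodeModel

# Randomly initialized model using Multi-Head instead of Multi-Query Attention
config = GPTBigCodeConfig(n_layer=6, n_head=8, n_embd=512, multi_query=False)
model = GPTBigCodeModel(config)
print(sum(p.numel() for p in model.parameters()))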
GPTBigCodeModel
class transformers.GPTBigCodeModel
(
config
)
Parameters
config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT_BIGCODE Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.Tensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
past_key_values (Tuple[torch.Tensor] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTBigCodeConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The GPTBigCodeModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTBigCodeModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = GPTBigCodeModel.from_pretrained("bigcode/gpt_bigcode-santacoder")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
GPTBigCodeForCausalLM
class transformers.GPTBigCodeForCausalLM
(
config
)
Parameters
config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT_BIGCODE Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
past_key_values (Tuple[torch.Tensor] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.Tensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTBigCodeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The GPTBigCodeForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, GPTBigCodeForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
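A hedged follow-up sketch showing generation with the key/value cache described above (the prompt and generation settings are illustrative):
import torch
from transformers import AutoTokenizer, GPTBigCodeForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
# use_cache defaults to True, so past key/value states are reused at each decoding step
generated = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(generated[0], skip_special_tokens=True))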
GPTBigCodeForSequenceClassification
class transformers.GPTBigCodeForSequenceClassification
(
config
)
Parameters
config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPTBigCode Model transformer with a sequence classification head on top (linear layer).
GPTBigCodeForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-1) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
past_key_values (Tuple[torch.Tensor] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The GPTBigCodeForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
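A minimal sketch of the behavior described above (the base checkpoint has no classification head, so the head below is randomly initialized; num_labels and the prompt are illustrative):
import torch
from transformers import AutoTokenizer, GPTBigCodeForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
# setting pad_token_id lets the model locate the last non-padding token in each row
model = GPTBigCodeForSequenceClassification.from_pretrained(
    "bigcode/gpt_bigcode-santacoder", num_labels=2, pad_token_id=tokenizer.eos_token_id
)

inputs = tokenizer("print('hello world')", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
predicted_class_id = int(logits.argmax(dim=-1))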
GPTBigCodeForTokenClassification
class transformers.GPTBigCodeForTokenClassification
(
config
)
Parameters
config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GPT_BIGCODE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
past_key_values (Tuple[torch.Tensor] of length config.n_layers) —
Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input_ids as they have already been computed.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
If past_key_values is used, attention_mask needs to contain the masking strategy that was used for
past_key_values. In other words, the attention_mask always has to have the length:
len(past_key_values) + len(input_ids)
What are attention masks?
token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
If past_key_values is used, optionally only the last inputs_embeds have to be input (see
past_key_values).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The GPTBigCodeForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
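A minimal sketch of the API (the token-classification head is randomly initialized on top of the base checkpoint; num_labels and the input are illustrative):
import torch
from transformers import AutoTokenizer, GPTBigCodeForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = GPTBigCodeForTokenClassification.from_pretrained("bigcode/gpt_bigcode-santacoder", num_labels=3)

inputs = tokenizer("x = 1", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, sequence_length, num_labels)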
The quickest way to get started with SegFormer is by checking the example notebooks (which showcase both inference and
fine-tuning on custom data). One can also check out the blog post introducing SegFormer and illustrating how it can be fine-tuned on custom data.
TensorFlow users should refer to this repository that shows off-the-shelf inference and fine-tuning.
One can also check out this interactive demo on Hugging Face Spaces
to try out a SegFormer model on custom images.
SegFormer works on any input size, as it pads the input to be divisible by config.patch_sizes.
One can use SegformerImageProcessor to prepare images and corresponding segmentation maps
for the model. Note that this image processor is fairly basic and does not include all data augmentations used in
the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found here. The most
important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
such as 512x512 or 640x640, after which they are normalized.
One additional thing to keep in mind is that one can initialize SegformerImageProcessor with
reduce_labels set to True or False. In some datasets (like ADE20k), the 0 index is used in the annotated
segmentation maps for background. However, ADE20k doesn’t include the “background” class in its 150 labels.
Therefore, reduce_labels is used to reduce all labels by 1, and to make sure no loss is computed for the
background class (i.e. it replaces 0 in the annotated maps by 255, which is the ignore_index of the loss function
used by SegformerForSemanticSegmentation). However, other datasets use the 0 index as
background class and include this class as part of all labels. In that case, reduce_labels should be set to
False, as loss should also be computed for the background class.
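A minimal sketch of the reduce_labels behavior described above, using random dummy data in place of a real ADE20k sample:
import numpy as np
from PIL import Image
from transformers import SegformerImageProcessor

# dummy RGB image and an ADE20k-style annotation where 0 marks the background
image = Image.fromarray(np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8))
segmentation_map = Image.fromarray(np.random.randint(0, 151, (512, 512), dtype=np.uint8))

image_processor = SegformerImageProcessor(do_reduce_labels=True)
encoded = image_processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")

# background pixels (0) become 255, the ignore_index of the loss; other labels are shifted down by 1
print(encoded["pixel_values"].shape, encoded["labels"].shape)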
Like most models, SegFormer comes in different sizes, the details of which can be found in the table below
(taken from Table 7 of the original paper).
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
|---|---|---|---|---|---|
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For
SegFormer’s results on the segmentation datasets like ADE20k, refer to the paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer.
Image Classification
SegformerForImageClassification is supported by this example script and notebook.
Image classification task guide
Semantic segmentation:
SegformerForSemanticSegmentation is supported by this example script.
A blog on fine-tuning SegFormer on a custom dataset can be found here.
More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found here.
TFSegformerForSemanticSegmentation is supported by this example notebook.
Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SegformerConfig
class transformers.SegformerConfig
(
num_channels = 3
num_encoder_blocks = 4
depths = [2, 2, 2, 2]
sr_ratios = [8, 4, 2, 1]
hidden_sizes = [32, 64, 160, 256]
patch_sizes = [7, 3, 3, 3]
strides = [4, 2, 2, 2]
num_attention_heads = [1, 2, 5, 8]
mlp_ratios = [4, 4, 4, 4]
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
classifier_dropout_prob = 0.1
initializer_range = 0.02
drop_path_rate = 0.1
layer_norm_eps = 1e-06
decoder_hidden_size = 256
semantic_loss_ignore_index = 255
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_encoder_blocks (int, optional, defaults to 4) —
The number of encoder blocks (i.e. stages in the Mix Transformer encoder).
depths (List[int], optional, defaults to [2, 2, 2, 2]) —
The number of layers in each encoder block.
sr_ratios (List[int], optional, defaults to [8, 4, 2, 1]) —
Sequence reduction ratios in each encoder block.
hidden_sizes (List[int], optional, defaults to [32, 64, 160, 256]) —
Dimension of each of the encoder blocks.
patch_sizes (List[int], optional, defaults to [7, 3, 3, 3]) —
Patch size before each encoder block.
strides (List[int], optional, defaults to [4, 2, 2, 2]) —
Stride before each encoder block.
num_attention_heads (List[int], optional, defaults to [1, 2, 5, 8]) —
Number of attention heads for each attention layer in each block of the Transformer encoder.
mlp_ratios (List[int], optional, defaults to [4, 4, 4, 4]) —
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
classifier_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability before the classification head.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
drop_path_rate (float, optional, defaults to 0.1) —
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
decoder_hidden_size (int, optional, defaults to 256) —
The dimension of the all-MLP decode head.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a SegformerModel. It is used to instantiate an
SegFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SegFormer
nvidia/segformer-b0-finetuned-ade-512-512
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SegformerModel, SegformerConfig
# Initializing a SegFormer nvidia/segformer-b0-finetuned-ade-512-512 style configuration
configuration = SegformerConfig()
# Initializing a model from the nvidia/segformer-b0-finetuned-ade-512-512 style configuration
model = SegformerModel(configuration)
# Accessing the model configuration
configuration = model.config
SegformerFeatureExtractor
class transformers.SegformerFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
segmentation_maps = None
**kwargs
)
Preprocesses a batch of images and optionally segmentation maps.
Overrides the __call__ method of the Preprocessor class so that both images and segmentation maps can be
passed in as positional arguments.
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
semantic_segmentation
Parameters
outputs (SegformerForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) —
List of tuples corresponding to the requested final size (height, width) of each prediction. If left to
None, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic
segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is
specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of SegformerForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
SegformerImageProcessor
class transformers.SegformerImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_reduce_labels: bool = False
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"height": 512, "width": 512}) —
Size of the output image after resizing. Can be overridden by the size parameter in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_reduce_labels (bool, optional, defaults to False) —
Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is
used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The
background label will be replaced by 255. Can be overridden by the do_reduce_labels parameter in the
preprocess method.
Constructs a Segformer image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_reduce_labels: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
segmentation_maps (ImageInput, optional) —
Segmentation map to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resize is applied.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the [0, 1] range.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) —
Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g.
ADE20k). The background label will be replaced by 255.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
semantic_segmentation
Parameters
outputs (SegformerForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) —
List of tuples corresponding to the requested final size (height, width) of each prediction. If left to
None, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic
segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is
specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of SegformerForSemanticSegmentation into semantic segmentation maps. Only supports
PyTorch.
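A minimal end-to-end sketch of the post-processing step (the random input image stands in for a real photo):
import numpy as np
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

image_processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# target_sizes expects (height, width); PIL's size attribute is (width, height)
segmentation = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(segmentation.shape)  # torch.Size([480, 640]); each entry is a semantic class id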
SegformerModel
class transformers.SegformerModel
(
config
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SegFormer encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SegformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SegformerModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SegformerModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerModel.from_pretrained("nvidia/mit-b0")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 256, 16, 16]
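Reusing model and inputs from the example above, a short sketch (not part of the original example) of requesting the encoder's intermediate feature maps:
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
# one (batch_size, num_channels, height, width) feature map per encoder stage
print([tuple(h.shape) for h in outputs.hidden_states])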
SegformerDecodeHead
class transformers.SegformerDecodeHead
(
config
)
forward
(
encoder_hidden_states: FloatTensor
)
SegformerForImageClassification
class transformers.SegformerForImageClassification
(
config
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SegFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden
states) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.segformer.modeling_segformer.SegFormerImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.models.segformer.modeling_segformer.SegFormerImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.segformer.modeling_segformer.SegFormerImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SegformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SegformerForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SegformerForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForImageClassification.from_pretrained("nvidia/mit-b0")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
SegformerForSemanticSegmentation
class transformers.SegformerForSemanticSegmentation
(
config
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SegFormer Model transformer with an all-MLP decode head on top e.g. for ADE20k, CityScapes.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SegformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing quality when a user needs to resize the logits to the
original image size as post-processing. You should always check the shape of your logits and resize as needed (a short resizing sketch follows the example below).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SegformerForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
image_processor = AutoImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
list(logits.shape)
[1, 150, 128, 128]
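As noted above, the logits are smaller than the input. A minimal sketch of resizing them yourself, reusing image and logits from the example above, as an alternative to post_process_semantic_segmentation:
import torch.nn.functional as F
# interpolate to the original (height, width); PIL's Image.size is (width, height), hence the reversal
upsampled_logits = F.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
pred_seg = upsampled_logits.argmax(dim=1)[0]  # per-pixel class ids of shape (height, width)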
TFSegformerDecodeHead
class transformers.TFSegformerDecodeHead
(
*args
**kwargs
)
call
(
encoder_hidden_states
training: bool = False
)
TFSegformerModel
class transformers.TFSegformerModel
(
*args
**kwargs
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SegFormer encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
(
pixel_values: tf.Tensor
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SegformerConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFSegformerModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFSegformerModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0")
model = TFSegformerModel.from_pretrained("nvidia/mit-b0")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 256, 16, 16]
TFSegformerForImageClassification
class transformers.TFSegformerForImageClassification
(
*args
**kwargs
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SegFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden
states) e.g. for ImageNet.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
(
pixel_values: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SegformerConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFSegformerForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFSegformerForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("nvidia/mit-b0")
model = TFSegformerForImageClassification.from_pretrained("nvidia/mit-b0")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
tabby, tabby cat
TFSegformerForSemanticSegmentation
class transformers.TFSegformerForSemanticSegmentation
(
*args
**kwargs
)
Parameters
config (SegformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SegFormer Model transformer with an all-MLP decode head on top e.g. for ADE20k, CityScapes.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
call
(
pixel_values: tf.Tensor
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
)
→
transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
SegformerImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a (per-pixel) classification loss is computed
(Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (SegformerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing quality when a user needs to resize the logits to the
original image size as post-processing. You should always check the shape of your logits and resize as needed (a short resizing sketch follows the example below).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFSegformerForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs, training=False)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
list(logits.shape)
[1, 150, 128, 128]
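A hedged TensorFlow counterpart of the resizing step, reusing image and outputs from the example above; tf.image.resize expects channels-last, hence the transpose:
import tensorflow as tf
logits_nhwc = tf.transpose(outputs.logits, [0, 2, 3, 1])         # (batch, height/4, width/4, num_labels)
upsampled = tf.image.resize(logits_nhwc, size=image.size[::-1])  # back to the original (height, width)
pred_seg = tf.math.argmax(upsampled, axis=-1)[0]                 # per-pixel class ids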
CPMAnt
Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environmentally friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. In addition to the full model, various compressed versions are provided to meet the requirements of different hardware configurations. See more
Tips:
This model was contributed by OpenBMB. The original code can be found here.
⚙️ Training & Inference
A tutorial on CPM-Live.
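As a quick, hedged inference sketch (assuming the text-generation pipeline supports this model, as the attention_mask note further down suggests, and that your hardware can hold the 10B checkpoint):
from transformers import pipeline
generator = pipeline("text-generation", model="openbmb/cpm-ant-10b")
print(generator("今天天气真好，", max_new_tokens=30))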
CpmAntConfig
class transformers.CpmAntConfig
(
vocab_size: int = 30720
hidden_size: int = 4096
num_attention_heads: int = 32
dim_head: int = 128
dim_ff: int = 10240
num_hidden_layers: int = 48
dropout_p: int = 0.0
position_bias_num_buckets: int = 512
position_bias_max_distance: int = 2048
eps: int = 1e-06
init_std: float = 1.0
prompt_types: int = 32
prompt_length: int = 32
segment_types: int = 32
use_cache: bool = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30720) —
Vocabulary size of the CPMAnt model. Defines the number of different tokens that can be represented by the
input passed when calling CpmAntModel.
hidden_size (int, optional, defaults to 4096) —
Dimension of the encoder layers.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads in the Transformer encoder.
dim_head (int, optional, defaults to 128) —
Dimension of attention heads for each attention layer in the Transformer encoder.
dim_ff (int, optional, defaults to 10240) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 48) —
Number of layers of the Transformer encoder.
dropout_p (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
position_bias_num_buckets (int, optional, defaults to 512) —
The number of position_bias buckets.
position_bias_max_distance (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
prompt_types (int, optional, defaults to 32) —
The number of prompt types.
prompt_length (int, optional, defaults to 32) —
The length of the prompt.
segment_types (int, optional, defaults to 32) —
The number of segment types.
use_cache (bool, optional, defaults to True) —
Whether to use the cache.
init_std (float, optional, defaults to 1.0) —
Initialize parameters with std = init_std.
This is the configuration class to store the configuration of a CpmAntModel. It is used to instantiate an
CPMAnt model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CPMAnt
openbmb/cpm-ant-10b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CpmAntModel, CpmAntConfig
# Initializing a CPMAnt cpm-ant-10b style configuration
configuration = CpmAntConfig()
# Initializing a model from the cpm-ant-10b style configuration
model = CpmAntModel(configuration)
# Accessing the model configuration
configuration = model.config
CpmAntTokenizer
class transformers.CpmAntTokenizer
(
vocab_file
bod_token = '<d>'
eod_token = '</d>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = '<pad>'
unk_token = '<unk>'
line_token = '</n>'
space_token = '</_>'
padding_side = 'left'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bod_token (str, optional, defaults to "<d>") —
The beginning of document token.
eod_token (str, optional, defaults to "</d>") —
The end of document token.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding.
unk_token (str, optional, defaults to "<unk>") —
The unknown token.
line_token (str, optional, defaults to "</n>") —
The line token.
space_token (str, optional, defaults to "</_>") —
The space token.
Construct a CPMAnt tokenizer. Based on byte-level Byte-Pair-Encoding.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.List[int] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence to which special tokens will be added.
token_ids_1 (List[int]) — The optional second tokenized sequence to which special tokens will be added.
Returns
List[int]
The model input with special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CPMAnt sequence has the following format (see the short sketch below):
single sequence: [BOS] Sequence.
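A hedged sketch of what this looks like in practice, assuming the openbmb/cpm-ant-10b vocabulary used elsewhere on this page:
from transformers import CpmAntTokenizer
tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("你好"))
# prepends the id of the [BOS] token, matching the "[BOS] Sequence" format above
print(tokenizer.build_inputs_with_special_tokens(ids))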
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) — List of IDs.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
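Continuing the tokenizer sketch above, the mask should mark the prepended special token with 1 and ordinary sequence tokens with 0 (illustrative only):
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("你好"))
print(tokenizer.get_special_tokens_mask(ids))  # e.g. [1, 0, 0, ...]
with_special = tokenizer.build_inputs_with_special_tokens(ids)
print(tokenizer.get_special_tokens_mask(with_special, already_has_special_tokens=True))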
CpmAntModel
class transformers.CpmAntModel
(
config: CpmAntConfig
)
The bare CPMAnt Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters
config (~CpmAntConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.Tensor of shape (batch_size, seq_len)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using CpmAntTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CpmAntConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head), and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CpmAntModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CpmAntModel
import torch
tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b")
model = CpmAntModel.from_pretrained("openbmb/cpm-ant-10b")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
CpmAntForCausalLM
class transformers.CpmAntForCausalLM
(
config: CpmAntConfig
)
The CPMAnt Model with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
Parameters
config (~CpmAntConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Union[typing.List[typing.Tuple[torch.Tensor, torch.Tensor]], NoneType] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
attention_mask: typing.Optional[torch.Tensor] = None
**kwargs
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.Tensor of shape (batch_size, seq_len)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using CpmAntTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
CPMAnt processes the attention mask automatically; this parameter is a dummy parameter for the
text-generation pipeline.
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CpmAntConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CpmAntForCausalLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, CpmAntForCausalLM
tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b")
model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
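Reusing tokenizer and model from the example above, a hedged sketch of text generation; CpmAntForCausalLM should work with generate() like other causal language models:
inputs = tokenizer("今天天气真好，", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(generated[0]))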