This model was released on 2025-03-26 and added to Hugging Face Transformers on 2025-04-14.

Qwen2.5-Omni

PyTorch FlashAttention SDPA

Overview

The Qwen2.5-Omni model is a unified multimodal model proposed in the Qwen2.5-Omni Technical Report from the Qwen team, Alibaba Group.

The abstract from the technical report is the following:

We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model. Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture. In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench. Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperform most existing streaming and non-streaming alternatives in robustness and naturalness.

Notes

  • Use Qwen2_5OmniForConditionalGeneration to generate both audio and text output. To generate only one output type, use Qwen2_5OmniThinkerForConditionalGeneration for text-only and Qwen2_5OmniTalkerForConditionalGeneration for audio-only outputs.
  • Audio generation with Qwen2_5OmniForConditionalGeneration only supports a batch size of 1 at the moment.
  • In case of out-of-memory errors when working with video input, decrease processor.max_pixels. By default the maximum is set to a very large value and high-resolution visuals will not be resized unless their resolution exceeds processor.max_pixels (see the sketch after this list).
  • The processor has its own apply_chat_template() method to convert chat messages to model inputs.
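
The snippet below is a minimal sketch of the max_pixels tip above; the exact value is only an illustration and should be tuned to your available GPU memory, not treated as a recommended default.

from transformers import Qwen2_5OmniProcessor

# Cap the visual token budget so long or high-resolution videos do not trigger out-of-memory errors.
# 512 * 28 * 28 is an illustrative cap, not an official default.
processor = Qwen2_5OmniProcessor.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    max_pixels=512 * 28 * 28,
)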

Usage example

Qwen2.5-Omni can be found on the Hugging Face Hub.

Single Media inference

The model can accept text, images, audio and videos as input. Here is example code for inference.

import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversations = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "text", "text": "What cant you hear and see in this video?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_fps=1,

    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.device)

# Generation params for audio or text can be different and have to be prefixed with `thinker_` or `talker_`
text_ids, audio = model.generate(**inputs, use_audio_in_video=True, thinker_do_sample=False, talker_do_sample=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
print(text)

Text-only generation

To generate only text output and save compute by not loading the audio generation model, we can use the Qwen2_5OmniThinkerForConditionalGeneration model.

from transformers import Qwen2_5OmniThinkerForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

conversations = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "text", "text": "What cant you hear and see in this video?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_fps=1,

    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.device)


text_ids = model.generate(**inputs, use_audio_in_video=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(text)

Batch Mixed Media Inference

The model can batch inputs composed of mixed samples of various types, such as text, images, audio and videos. Because audio generation only supports a batch size of 1, pass return_audio=False to generate() (or use the Qwen2_5OmniThinkerForConditionalGeneration model) for batched inference. Here is an example.

import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

# Conversation with video only
conversation1 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "path": "/path/to/video.mp4"},
        ]
    }
]

# Conversation with audio only
conversation2 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "audio", "path": "/path/to/audio.wav"},
        ]
    }
]

# Conversation with pure text
conversation3 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [{"type": "text", "text": "who are you?"}],
    }
]


# Conversation with mixed media
conversation4 = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "/path/to/image.jpg"},
            {"type": "video", "path": "/path/to/video.mp4"},
            {"type": "audio", "path": "/path/to/audio.wav"},
            {"type": "text", "text": "What are the elements can you see and hear in these medias?"},
        ],
    }
]

conversations = [conversation1, conversation2, conversation3, conversation4]

inputs = processor.apply_chat_template(
    conversations,
    load_audio_from_video=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_fps=1,

    # kwargs to be passed to `Qwen2_5OmniProcessor`
    padding=True,
    use_audio_in_video=True,
).to(model.thinker.device)

text_ids = model.generate(**inputs, use_audio_in_video=True, return_audio=False)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(text)

Usage Tips

Image Resolution trade-off

The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs.

from transformers import AutoProcessor

min_pixels = 128*28*28
max_pixels = 768*28*28
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B", min_pixels=min_pixels, max_pixels=max_pixels)

Prompt for audio output

If users need audio output, the system prompt must be set as “You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.”, otherwise the audio output may not work as expected.

{
    "role": "system",
    "content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
}

Use audio output or not

The model supports both text and audio outputs. If you do not need audio output, set enable_audio_output=False in the from_pretrained() call. This option saves about 2GB of GPU memory, but the return_audio option of generate() can then only be set to False.

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
    enable_audio_output=False,
)

For a more flexible experience, we recommend setting enable_audio_output=True when initializing the model with from_pretrained(), and then deciding whether to return audio when generate() is called. When return_audio is set to False, the model only returns text outputs, which makes text responses faster.

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    dtype="auto",
    device_map="auto",
    enable_audio_output=True,
)
...
text_ids = model.generate(**inputs, return_audio=False)

Change voice type of output audio

Qwen2.5-Omni supports changing the voice of the output audio. Use the spk parameter of the generate() function to specify the voice type. The "Qwen/Qwen2.5-Omni-7B" checkpoint supports two voice types: Chelsie (a female voice) and Ethan (a male voice). If spk is not specified, the default voice type is Chelsie.

text_ids, audio = model.generate(**inputs, spk="Chelsie")
text_ids, audio = model.generate(**inputs, spk="Ethan")

Flash-Attention 2 to speed up generation

First, make sure to install the latest version of Flash Attention 2:

pip install -U flash-attn --no-build-isolation

Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the flash attention repository. FlashAttention-2 can only be used when a model is loaded in torch.float16 or torch.bfloat16.

To load and run a model using FlashAttention-2, add attn_implementation="flash_attention_2" when loading the model:

import torch
from transformers import Qwen2_5OmniForConditionalGeneration

model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-Omni-7B",
    device_map="auto",
    dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

Qwen3OmniMoeConfig

class transformers.Qwen3OmniMoeConfig


( thinker_config = None talker_config = None code2wav_config = None enable_audio_output = True im_start_token_id = 151644 im_end_token_id = 151645 tts_pad_token_id = 151671 tts_bos_token_id = 151672 tts_eos_token_id = 151673 system_token_id = 8948 user_token_id = 872 assistant_token_id = 77091 **kwargs )

Parameters

  • thinker_config (dict, optional) — Configuration of the underlying thinker sub-model.
  • talker_config (dict, optional) — Configuration of the underlying talker sub-model.
  • code2wav_config (dict, optional) — Configuration of the underlying code2wav sub-model.
  • enable_audio_output (bool, optional, defaults to True) — Whether to enable audio output and load the talker and code2wav modules.

This is the configuration class to store the configuration of a Qwen3OmniMoeForConditionalGeneration. It is used to instantiate a Qwen3Omni model according to the specified sub-models configurations, defining the model architecture.

Instantiating a configuration with the defaults will yield a similar configuration to that of the Qwen/Qwen2.5-Omni-7B architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import (
...     Qwen3OmniMoeThinkerConfig,
...     Qwen3OmniMoeTalkerConfig,
...     Qwen3OmniMoeCode2WavConfig,
...     Qwen3OmniMoeForConditionalGeneration,
...     Qwen3OmniMoeConfig,
... )

>>> # Initializing a Qwen3OmniMoe style configuration
>>> configuration = Qwen3OmniMoeConfig()

>>> # Initializing a model from the configuration
>>> model = Qwen3OmniMoeForConditionalGeneration(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

get_text_config


( decoder = False )

Parameters

  • decoder (Optional[bool], optional, defaults to False) — If set to True, then only search for decoder config names.

Returns the config that is meant to be used with text IO. On most models, it is the original config instance itself. On specific composite models, it is under a set of valid names.
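
A minimal illustration of this method, assuming the default configuration from the example above:

>>> from transformers import Qwen3OmniMoeConfig

>>> config = Qwen3OmniMoeConfig()

>>> # Retrieve the sub-configuration used for text decoding
>>> text_config = config.get_text_config(decoder=True)
>>> print(type(text_config).__name__)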

Qwen3OmniMoeThinkerConfig

class transformers.Qwen3OmniMoeThinkerConfig


( audio_config = None vision_config = None text_config = None audio_token_id = 151646 image_token_id = 151655 video_token_id = 151656 position_id_per_seconds = 25 audio_start_token_id = 151647 user_token_id = 872 initializer_range = 0.02 **kwargs )

Parameters

  • audio_config (dict, optional) — The config dictionary of the audio backbone.
  • vision_config (dict, optional) — The config dictionary of the vision backbone.
  • text_config (dict, optional) — The config dictionary of the text backbone.
  • audio_token_id (int, optional, defaults to 151646) — The audio token id to encode the audio prompt.
  • image_token_id (int, optional, defaults to 151655) — The image token id to encode the image prompt.
  • video_token_id (int, optional, defaults to 151656) — The video token id to encode the video prompt.
  • position_id_per_seconds (int, optional, defaults to 25) — The increment of position id per second.
  • audio_start_token_id (int, optional, defaults to 151647) — The audio start token id to encode the audio prompt.
  • user_token_id (int, optional, defaults to 872) — The user token id to encode the user token.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

This is the configuration class to store the configuration of a Qwen3OmniMoeThinker. It is used to instantiate a Qwen3-Omni-Thinker model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the thinker component of the Qwen3-Omni architecture.

e.g. Qwen/Qwen3-Omni-7B

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import Qwen3OmniMoeThinkerModel, Qwen3OmniMoeThinkerConfig

>>> # Initializing a default Qwen3OmniMoeThinkerConfig
>>> configuration = Qwen3OmniMoeThinkerConfig()

>>> # Initializing a model (with random weights) from the default configuration
>>> model = Qwen3OmniMoeThinkerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

Qwen3OmniMoeTalkerConfig

class transformers.Qwen3OmniMoeTalkerConfig


( code_predictor_config = None text_config = None num_code_groups = 32 thinker_hidden_size = 2048 codec_eos_token_id = 4198 accept_hidden_layer = 18 codec_nothink_id = 4203 codec_think_bos_id = 4204 codec_think_eos_id = 4205 codec_pad_id = 4196 codec_bos_id = 4197 audio_token_id = 151646 image_token_id = 151655 video_token_id = 151656 vision_start_token_id = 151652 position_id_per_seconds = 25 audio_start_token_id = 151669 speaker_id = None **kwargs )

Parameters

  • code_predictor_config (dict, optional) — A dictionary of configuration parameters used to initialize a Qwen3OmniMoeTalkerCodePredictorConfig. If not provided, defaults will be used.
  • text_config (dict, optional) — A dictionary of configuration parameters used to initialize a Qwen3OmniMoeTalkerTextConfig. If not provided, defaults will be used.
  • num_code_groups (int, optional, defaults to 32) — Number of codebook groups used in the predicted acoustic token sequence, corresponding to multi-codebook VQ representation.
  • thinker_hidden_size (int, optional, defaults to 2048) — Hidden dimension size of the thinker module used for intermediate reasoning or latent planning before audio generation.
  • codec_eos_token_id (int, optional, defaults to 4198) — Token ID representing the end-of-speech token in the codec-generated sequence.
  • accept_hidden_layer (int, optional, defaults to 18) — Index of the hidden layer whose output is used for accepting or refining generated tokens during think-and-speak process.
  • codec_nothink_id (int, optional, defaults to 4203) — Token ID indicating no thinking step is required during generation.
  • codec_think_bos_id (int, optional, defaults to 4204) — Token ID marking the beginning of a thinking sequence.
  • codec_think_eos_id (int, optional, defaults to 4205) — Token ID marking the end of a thinking sequence.
  • codec_pad_id (int, optional, defaults to 4196) — Padding token ID used in codec input sequences.
  • codec_bos_id (int, optional, defaults to 4197) — Beginning-of-speech token ID in codec sequences.
  • audio_token_id (int, optional, defaults to 151646) — Special token ID used to indicate the position of audio tokens in the input sequence.
  • image_token_id (int, optional, defaults to 151655) — Special token ID used to represent image inputs in the multimodal context.
  • video_token_id (int, optional, defaults to 151656) — Special token ID used to represent video inputs.
  • vision_start_token_id (int, optional, defaults to 151652) — Token ID indicating the start of a visual input sequence (e.g., image or video embeddings).
  • position_id_per_seconds (int, optional, defaults to 25) — Number of position IDs allocated per second of audio content, used for temporal alignment in generation.
  • audio_start_token_id (int, optional, defaults to 151669) — Token ID that indicates the start of an audio generation segment in the output.
  • speaker_id (dict, optional) — Speaker name to speaker id dict.

This is the configuration class to store the configuration of a Qwen3OmniMoeTalker. It is used to instantiate a Qwen3-Omni multi-modal talker model capable of handling text, audio, and vision modalities in a unified architecture. The model integrates a text decoder with a code predictor for autoregressive generation of both semantic and acoustic tokens, enabling speech and multimodal content generation. This configuration wraps sub-configurations for the text and code predictor components, allowing modular setup and initialization.

e.g. Qwen/Qwen3-Omni-7B

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import Qwen3OmniMoeTalkerConfig, Qwen3OmniMoeTalker

>>> # Initialize a Qwen3OmniMoeTalkerConfig with default sub-configurations
>>> config = Qwen3OmniMoeTalkerConfig(
...     num_code_groups=32,
...     thinker_hidden_size=2048,
... )

>>> # Initialize the full Qwen3-Omni Talker model
>>> model = Qwen3OmniMoeTalker(config)

>>> # Access the model configuration
>>> config = model.config
>>> print(config.text_config)  # Access text decoder configuration
>>> print(config.code_predictor_config)  # Access code predictor configuration

Qwen3OmniMoeForConditionalGeneration

class transformers.Qwen3OmniMoeForConditionalGeneration


( config: Qwen3OmniMoeConfig )

Qwen3OmniMoeThinkerTextModel

class transformers.Qwen3OmniMoeThinkerTextModel


( config: Qwen3OmniMoeTextConfig )

Parameters

  • config (Qwen3OmniMoeTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Text part of Qwen3OmniMoeThinker, not a pure text-only model, as DeepStack integrates visual features into the early hidden states.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None visual_pos_masks: typing.Optional[torch.Tensor] = None deepstack_visual_embeds: typing.Optional[list[torch.Tensor]] = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • visual_pos_masks (torch.Tensor of shape (batch_size, seqlen), optional) — The mask of the visual positions.
  • deepstack_visual_embeds (list[torch.Tensor], optional) — The deepstack visual embeddings. The shape is (num_layers, visual_seqlen, embed_dim). The feature is extracted from the different visual encoder layers, and fed to the decoder hidden states. It’s from the paper DeepStack(https://arxiv.org/abs/2406.04334).

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeThinkerTextModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
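
A minimal sketch of calling this module directly. It builds the decoder from the default thinker configuration (randomly initialized weights, which can be large) and assumes the text sub-config is exposed as .text_config; the outputs are not meaningful, the point is only the call signature.

>>> import torch
>>> from transformers import Qwen3OmniMoeThinkerConfig, Qwen3OmniMoeThinkerTextModel

>>> # Build the text decoder from the thinker's text sub-config (no pretrained checkpoint)
>>> thinker_config = Qwen3OmniMoeThinkerConfig()
>>> model = Qwen3OmniMoeThinkerTextModel(thinker_config.text_config)

>>> # Run a dummy token sequence through the decoder
>>> input_ids = torch.randint(0, thinker_config.text_config.vocab_size, (1, 8))
>>> outputs = model(input_ids=input_ids, use_cache=False)
>>> print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)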

Qwen3OmniMoeThinkerForConditionalGeneration

class transformers.Qwen3OmniMoeThinkerForConditionalGeneration


( config )

The Qwen2.5OmniThinker model, which consists of an audio backbone and a language model.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids = None input_features = None pixel_values = None pixel_values_videos = None image_grid_thw = None video_grid_thw = None attention_mask = None feature_attention_mask = None audio_feature_lengths = None position_ids = None past_key_values = None inputs_embeds = None rope_deltas = None labels = None use_cache = None output_router_logits: typing.Optional[bool] = None use_audio_in_video = None cache_position = None video_second_per_grid = None **kwargs ) transformers.models.qwen3_omni_moe.modeling_qwen3_omni_moe.Qwen3OmniMoeThinkerCausalLMOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • input_features (torch.FloatTensor of shape (batch_size, sequence_length, feature_dim), optional) — The tensors corresponding to the input audio features. Audio features can be obtained using the feature extractor's __call__() method (Qwen3OmniMoeProcessor uses the feature extractor for processing audios).
  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using the image processor's __call__() method (Qwen3OmniMoeProcessor uses the image processor for processing images).
  • pixel_values_videos (torch.FloatTensor of shape (batch_size, num_frames, num_channels, frame_size, frame_size), optional) — The tensors corresponding to the input videos. Pixel values for videos can be obtained using Qwen2VLVideoProcessor. See Qwen2VLVideoProcessor.__call__() for details (Qwen3OmniMoeProcessor uses Qwen2VLVideoProcessor for processing videos).
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.
  • video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width of feature shape of each video in LLM.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • feature_attention_mask (torch.Tensor of shape (batch_size, feature_sequence_length), optional) — Mask to avoid performing attention on padding feature indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.
  • audio_feature_lengths (torch.LongTensor of shape (num_audios), optional) — The length of feature shape of each audio in LLM.
  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • rope_deltas (torch.LongTensor of shape (batch_size, ), optional) — The rope index difference between sequence length and multimodal rope.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference.
  • use_audio_in_video (bool, optional) — Whether or not to use the audio track in the video; should be the same as the corresponding parameter in process_audio_info.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • video_second_per_grid (torch.LongTensor of shape (num_videos), optional) — Number of seconds per grid for each video, used for temporal feature mapping.

Returns

transformers.models.qwen3_omni_moe.modeling_qwen3_omni_moe.Qwen3OmniMoeThinkerCausalLMOutputWithPast or tuple(torch.FloatTensor)

A transformers.models.qwen3_omni_moe.modeling_qwen3_omni_moe.Qwen3OmniMoeThinkerCausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • rope_deltas (torch.LongTensor of shape (batch_size, ), optional) — The rope index difference between sequence length and multimodal rope.

The Qwen3OmniMoeThinkerForConditionalGeneration forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from io import BytesIO
>>> from urllib.request import urlopen
>>> import librosa
>>> from qwen_vl_utils import process_vision_info
>>> from transformers import Qwen3OmniMoeProcessor, Qwen3OmniMoeThinkerForConditionalGeneration

>>> thinker = Qwen3OmniMoeThinkerForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B")
>>> processor = Qwen3OmniMoeProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

>>> conversations = [
...     {'role': 'system', 'content': 'You are a helpful voice chat bot, and please respond to me in a casual conversation manner using random voice.'},
...     {"role": "user", "content": [
...         {"type": "image", "image_url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
...         {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
...     ]},
... ]

>>> text = processor.apply_chat_template(conversations, add_generation_prompt=True, tokenize=False)
>>> audios = [librosa.load(BytesIO(urlopen(conversations[1]['content'][1]['audio_url']).read()), sr=processor.feature_extractor.sampling_rate)]
>>> images, videos = process_vision_info(conversations)
>>> inputs = processor(text=text, audios=audios, images=images, videos=videos, return_tensors="pt", padding=True)

>>> # Generate
>>> inputs['use_audio_in_video'] = True  # or False
>>> generation = thinker.generate(**inputs, max_new_tokens=2048)
>>> generate_ids = generation[:, inputs.input_ids.size(1):]

>>> response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

get_audio_features


( input_features: FloatTensor feature_attention_mask: typing.Optional[torch.LongTensor] = None audio_feature_lengths: typing.Optional[torch.LongTensor] = None )

Parameters

  • input_features (torch.FloatTensor) — The tensors corresponding to the input audios.
  • feature_attention_mask (torch.LongTensor, optional) — Mask to avoid performing attention on padding feature indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.
  • audio_feature_lengths (torch.LongTensor of shape (num_audios), optional) — The length of feature shape of each audio in LLM.

Encodes audios into continuous embeddings that can be forwarded to the language model.

get_image_features


( pixel_values: FloatTensor image_grid_thw: typing.Optional[torch.LongTensor] = None )

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images.
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.

Encodes images into continuous embeddings that can be forwarded to the language model.

get_placeholder_mask


( input_ids: LongTensor inputs_embeds: FloatTensor image_features: typing.Optional[torch.FloatTensor] = None video_features: typing.Optional[torch.FloatTensor] = None )

Obtains multimodal placeholder mask from input_ids or inputs_embeds, and checks that the placeholder token count is equal to the length of multimodal features. If the lengths are different, an error is raised.

get_video_features


( pixel_values_videos: FloatTensor video_grid_thw: typing.Optional[torch.LongTensor] = None )

Parameters

  • pixel_values_videos (torch.FloatTensor of shape (batch_size, num_frames, num_channels, frame_size, frame_size)) — The tensors corresponding to the input videos.
  • video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width of feature shape of each video in LLM.

Encodes videos into continuous embeddings that can be forwarded to the language model.

Qwen3OmniMoeTalkerForConditionalGeneration

class transformers.Qwen3OmniMoeTalkerForConditionalGeneration


( config: Qwen3OmniMoeTalkerConfig )

Parameters

  • config (Qwen3OmniMoeTalkerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The Qwen3 Omni Moe Model for token generation conditioned on other modalities (e.g. image-text-to-text generation).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids = None attention_mask = None use_audio_in_video = None audio_feature_lengths = None video_second_per_grid = None image_grid_thw = None video_grid_thw = None position_ids = None past_key_values = None inputs_embeds = None labels = None use_cache = None output_router_logits = None cache_position = None residual_codes = None trailing_text_hidden = None tts_pad_embed = None generation_step = None talker_input_ids = None **kwargs ) [transformers.modeling_outputs.MoeCausalLMOutputWithPast] or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • use_audio_in_video (bool, optional) — If set to True, use the audio in video.
  • audio_feature_lengths (torch.LongTensor of shape (num_audios), optional) — The length of feature shape of each audio in LLM.
  • video_second_per_grid (torch.LongTensor of shape (num_videos), optional) — Number of seconds per grid for each video, used for temporal feature mapping.
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.
  • video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width of feature shape of each video in LLM.
  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • residual_codes (torch.Tensor) — The predicted residual codes of the previous step.
  • trailing_text_hidden (torch.Tensor) — Text hidden states from the thinker after the first token.
  • tts_pad_embed (torch.Tensor) — Embedding tensor of tts_pad_token_id.
  • generation_step (int) — Generation step since prefill, used to sync with trailing_text_hidden.
  • talker_input_ids (torch.Tensor) — Input ids from the thinker, used to compute 3D RoPE.

Returns

transformers.modeling_outputs.MoeCausalLMOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MoeCausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • aux_loss (torch.FloatTensor, optional, returned when labels is provided) — aux_loss for the sparse modules.

  • router_logits (tuple(torch.FloatTensor), optional, returned when output_router_probs=True and config.add_router_probs=True is passed or when config.output_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts).

    Raw router logits (post-softmax) that are computed by MoE routers, these terms are used to compute the auxiliary loss for Mixture of Experts models.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeTalkerForConditionalGeneration forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Qwen3OmniMoePreTrainedModel

class transformers.Qwen3OmniMoePreTrainedModel


( config: PretrainedConfig *inputs **kwargs )

Parameters

  • config (PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Qwen3OmniMoePreTrainedModelForConditionalGeneration

class transformers.Qwen3OmniMoePreTrainedModelForConditionalGeneration


( config: PretrainedConfig *inputs **kwargs )

get_chunked_index


( token_indices: Tensor tokens_per_chunk: int remove_index: int ) list[tuple[int, int]]

Parameters

  • token_indices (torch.Tensor of shape (seq_len, )) — A monotonically increasing list of token index values.
  • tokens_per_chunk (int) — Number of tokens per chunk (used as the chunk size threshold).
  • remove_index (int) — An index id to subtract from token_indices before chunking.

Returns

list[tuple[int, int]]

A list of tuples, each representing the start (inclusive) and end (exclusive) indices of a chunk in token_indices.

Splits token index list into chunks based on token value ranges.

Given a list of token indices, returns a list of (start, end) index tuples representing slices of the list where the token values fall within successive ranges of tokens_per_chunk.

For example, if tokens_per_chunk is 1000, the function will create chunks such that:

  • the first chunk contains token values < 1000,
  • the second chunk contains values >= 1000 and < 2000, and so on.
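
The standalone sketch below reproduces the chunking behavior described above for illustration only; it is not the library implementation.

import torch

def chunk_by_value_range(token_indices: torch.Tensor, tokens_per_chunk: int, remove_index: int):
    # Shift the indices, then close a chunk whenever a value crosses into the next
    # range of size `tokens_per_chunk` (0..N-1, N..2N-1, ...).
    values = (token_indices - remove_index).tolist()
    chunks, start = [], 0
    for i, value in enumerate(values):
        if value // tokens_per_chunk != values[start] // tokens_per_chunk:
            chunks.append((start, i))
            start = i
    chunks.append((start, len(values)))
    return chunks

print(chunk_by_value_range(torch.tensor([0, 10, 999, 1000, 1500, 2100]), tokens_per_chunk=1000, remove_index=0))
# [(0, 3), (3, 5), (5, 6)]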

get_rope_index


( input_ids: typing.Optional[torch.LongTensor] = None image_grid_thw: typing.Optional[torch.LongTensor] = None video_grid_thw: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None use_audio_in_video: bool = False audio_seqlens: typing.Optional[torch.LongTensor] = None second_per_grids: typing.Optional[torch.Tensor] = None )

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it.
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height and width of feature shape of each image in LLM.
  • video_grid_thw (torch.LongTensor of shape (num_videos, 3), optional) — The temporal, height and width of feature shape of each video in LLM.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.
  • use_audio_in_video (bool, optional) — If set to True, use the audio in video.
  • audio_seqlens (torch.LongTensor of shape (num_audios), optional) — The length of feature shape of each audio in LLM.
  • second_per_grids (torch.LongTensor of shape (num_videos), optional) — The time interval (in seconds) for each grid along the temporal dimension in the 3D position IDs.

Calculate the 3D rope index based on image and video’s temporal, height and width in LLM.

Explanation: Each embedding sequence contains vision embedding and text embedding or just contains text embedding.

For a pure text embedding sequence, the rotary position embedding is no different from modern LLMs. Example:

    input_ids: [T T T T T], here T is for text.
    temporal position_ids: [0, 1, 2, 3, 4]
    height position_ids: [0, 1, 2, 3, 4]
    width position_ids: [0, 1, 2, 3, 4]

For a vision and text embedding sequence, we calculate 3D rotary position embedding for the vision part and 1D rotary position embedding for the text part. Example:

    Temporal (Time): 3 patches, representing different segments of the video in time.
    Height: 2 patches, dividing each frame vertically.
    Width: 2 patches, dividing each frame horizontally.

We also have some important parameters:

    fps (Frames Per Second): The video’s frame rate, set to 1. This means one frame is processed each second.
    tokens_per_second: This is a crucial parameter. It dictates how many “time-steps” or “temporal tokens” are conceptually packed into a one-second interval of the video. In this case, we have 25 tokens per second. So each second of the video will be represented with 25 separate time points. It essentially defines the temporal granularity.
    temporal_patch_size: The number of frames that compose one temporal patch. Here, it’s 2 frames.
    interval: The step size for the temporal position IDs, calculated as tokens_per_second * temporal_patch_size / fps. In this case, 25 * 2 / 1 = 50. This means that each temporal patch will have a difference of 50 in the temporal position IDs.

    input_ids: [V V V V V V V V V V V V T T T T T], here V is for vision.
    vision temporal position_ids: [0, 0, 0, 0, 50, 50, 50, 50, 100, 100, 100, 100]
    vision height position_ids: [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]
    vision width position_ids: [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
    text temporal position_ids: [101, 102, 103, 104, 105]
    text height position_ids: [101, 102, 103, 104, 105]
    text width position_ids: [101, 102, 103, 104, 105]

Here we calculate the text start position_ids as the max vision position_ids plus 1.
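
The short sketch below just reproduces the interval arithmetic from the worked example above; it is an illustration, not the library implementation.

# Temporal position IDs for 3 temporal patches of a 2x2 spatial grid, as in the example above.
tokens_per_second = 25
temporal_patch_size = 2
fps = 1
interval = tokens_per_second * temporal_patch_size / fps  # 25 * 2 / 1 = 50

patches_per_frame = 2 * 2  # height patches * width patches
vision_temporal_position_ids = [
    int(t * interval) for t in range(3) for _ in range(patches_per_frame)
]
print(vision_temporal_position_ids)
# [0, 0, 0, 0, 50, 50, 50, 50, 100, 100, 100, 100]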

Qwen3OmniMoeTalkerModel

class transformers.Qwen3OmniMoeTalkerModel


( config: Qwen3OmniMoeTalkerTextConfig )

Parameters

  • config (Qwen3OmniMoeTalkerTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Text part of Qwen3OmniMoe, not a pure text-only model, as DeepStack integrates visual features into the early hidden states.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None visual_pos_masks: typing.Optional[torch.Tensor] = None deepstack_visual_embeds: typing.Optional[list[torch.Tensor]] = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • visual_pos_masks (torch.Tensor of shape (batch_size, seqlen), optional) — The mask of the visual positions.
  • deepstack_visual_embeds (list[torch.Tensor], optional) — The DeepStack visual embeddings, of shape (num_layers, visual_seqlen, embed_dim). The features are extracted from different visual encoder layers and fed into the decoder hidden states. See the DeepStack paper (https://arxiv.org/abs/2406.04334).

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeTalkerModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
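To make the roles of visual_pos_masks and deepstack_visual_embeds concrete, the following is a minimal conceptual sketch of DeepStack-style injection with made-up shapes; it illustrates the mechanism described above, not the model's actual implementation:

```python
import torch

# Conceptual illustration only (assumed shapes, not the model's code): visual
# features from several encoder layers are merged into the decoder hidden
# states at the positions flagged by the boolean mask.
batch_size, seq_len, hidden_size = 1, 8, 16
num_deepstack_layers, num_visual_tokens = 2, 3

hidden_states = torch.zeros(batch_size, seq_len, hidden_size)

# Boolean mask of shape (batch_size, seq_len) marking the visual positions.
visual_pos_masks = torch.zeros(batch_size, seq_len, dtype=torch.bool)
visual_pos_masks[0, 2 : 2 + num_visual_tokens] = True

# One tensor per early decoder layer, each of shape (visual_seqlen, embed_dim),
# where visual_seqlen == visual_pos_masks.sum().
deepstack_visual_embeds = [
    torch.randn(num_visual_tokens, hidden_size) for _ in range(num_deepstack_layers)
]

# At each early layer, add that layer's visual features at the masked positions.
for visual_embeds in deepstack_visual_embeds:
    hidden_states[visual_pos_masks] = hidden_states[visual_pos_masks] + visual_embeds

print(hidden_states[0, :, 0])  # non-zero only at the three visual positions
```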

Qwen3OmniMoeThinkerTextPreTrainedModel

class transformers.Qwen3OmniMoeThinkerTextPreTrainedModel

< >

( config: PretrainedConfig *inputs **kwargs )

Parameters

  • config (PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Qwen3OmniMoeProcessor

class transformers.Qwen3OmniMoeProcessor

< >

( image_processor = None video_processor = None feature_extractor = None tokenizer = None chat_template = None )

Parameters

  • image_processor (Qwen2VLImageProcessor, optional) — The image processor.
  • video_processor (Qwen2VLVideoProcessor, optional) — The video processor.
  • feature_extractor (WhisperFeatureExtractor, optional) — The audio feature extractor.
  • tokenizer (Qwen2TokenizerFast, optional) — The text tokenizer.
  • chat_template (Optional[str], optional) — The Jinja template to use for formatting the conversation. If not provided, the default chat template is used.

Constructs a Qwen3OmniMoe processor. Qwen3OmniMoeProcessor offers all the functionalities of Qwen2VLImageProcessor, WhisperFeatureExtractor, and Qwen2TokenizerFast. See __call__() and decode() for more information.
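A minimal usage sketch of the processor is shown below. The checkpoint name is hypothetical, chosen only for illustration, and the conversation format follows the processor's chat-template conventions:

```python
from transformers import AutoProcessor

# Hypothetical repository id, used here only for illustration; substitute
# the actual Qwen3-Omni MoE checkpoint you are working with.
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-Omni-MoE")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "/path/to/video.mp4"},
            {"type": "text", "text": "Describe what happens in this video."},
        ],
    }
]

# apply_chat_template formats the conversation and, with tokenize=True,
# returns model-ready tensors (text tokens plus video/audio features).
inputs = processor.apply_chat_template(
    conversation,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
print(inputs.keys())
```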

get_chunked_index

< >

( token_indices: ndarray tokens_per_chunk: int ) list[tuple[int, int]]

Parameters

  • token_indices (np.ndarray) — A monotonically increasing list of token index values.
  • tokens_per_chunk (int) — Number of tokens per chunk (used as the chunk size threshold).

Returns

list[tuple[int, int]]

A list of tuples, each representing the start (inclusive) and end (exclusive) indices of a chunk in token_indices.

Splits token index list into chunks based on token value ranges.

Given a list of token indices, returns a list of (start, end) index tuples representing slices of the list where the token values fall within successive ranges of tokens_per_chunk.

For example, if tokens_per_chunk is 1000, the function will create chunks such that:

  • the first chunk contains token values < 1000,
  • the second chunk contains values >= 1000 and < 2000, and so on.
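The following pure-Python sketch (a reference illustration of the behavior described above, not the library source) mirrors this chunking logic:

```python
import numpy as np

def chunk_by_value_range(token_indices: np.ndarray, tokens_per_chunk: int):
    """Reference sketch only: close a chunk whenever a value crosses the
    next multiple of tokens_per_chunk."""
    chunks, start, i, current_chunk = [], 0, 0, 1
    while i < len(token_indices):
        if token_indices[i] >= current_chunk * tokens_per_chunk:
            chunks.append((start, i))
            start = i
            current_chunk += 1
        else:
            i += 1
    chunks.append((start, len(token_indices)))
    return chunks

print(chunk_by_value_range(np.array([0, 400, 999, 1000, 1500, 2100]), 1000))
# [(0, 3), (3, 5), (5, 6)]  -> values < 1000, in [1000, 2000), in [2000, 3000)
```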

Qwen3OmniMoeCode2Wav

class transformers.Qwen3OmniMoeCode2Wav

< >

( config: Qwen3OmniMoeCode2WavConfig )

Qwen3OmniMoeCode2WavDecoderBlock

class transformers.Qwen3OmniMoeCode2WavDecoderBlock

< >

( config: Qwen3OmniMoeCode2WavConfig layer_idx )

Qwen3OmniMoeCode2WavTransformerModel

class transformers.Qwen3OmniMoeCode2WavTransformerModel

< >

( config: Qwen3OmniMoeCode2WavConfig )

Parameters

  • config (Qwen3OmniMoeCode2WavConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Qwen3 Omni Moe Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< >

( input_ids = None attention_mask = None position_ids = None past_key_values = None inputs_embeds = None use_cache = None cache_position = None **kwargs ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeCode2WavTransformerModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Qwen3OmniMoeTalkerCodePredictorModel

class transformers.Qwen3OmniMoeTalkerCodePredictorModel

< >

( config: Qwen3OmniMoeTalkerCodePredictorConfig )

Parameters

  • config (Qwen3OmniMoeTalkerCodePredictorConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Qwen3 Omni Moe Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< >

( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

Returns

transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeTalkerCodePredictorModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Qwen3OmniMoeTalkerCodePredictorModelForConditionalGeneration

class transformers.Qwen3OmniMoeTalkerCodePredictorModelForConditionalGeneration

< >

( config: Qwen3OmniMoeTalkerCodePredictorConfig )

Parameters

  • config (Qwen3OmniMoeTalkerCodePredictorConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The Qwen3 Omni Moe Model for token generation conditioned on other modalities (e.g. image-text-to-text generation).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< >

( input_ids = None attention_mask = None position_ids = None past_key_values = None inputs_embeds = None labels = None use_cache = None cache_position = None generation_steps = None **kwargs ) transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only Cache instance is allowed as input, see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, …, config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, …, config.vocab_size].
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • generation_steps (int, optional) — Generation step of the code predictor, in the range 0 to num_code_groups - 1.

Returns

transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Qwen3OmniMoeConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Qwen3OmniMoeTalkerCodePredictorModelForConditionalGeneration forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
