doi (string, 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, nullable) | journal_ref (string, 8–194, nullable) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
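For orientation, here is a minimal sketch of how rows with this schema could be reassembled into the underlying paper text. The storage path `rows.parquet` and the use of pandas are assumptions for illustration, not part of this dataset card.

```python
import pandas as pd

# Hypothetical export path; the actual storage format of this table is not specified here.
df = pd.read_parquet("rows.parquet")

# Each row holds one chunk of one paper; chunks of the same DOI are ordered by chunk-id.
mamba = df[df["doi"] == "2312.00752"].sort_values("chunk-id")

# Rejoin the chunk column to approximate the original document text
# (note that consecutive chunks may overlap slightly).
text = "\n".join(mamba["chunk"])
print(text[:500])
```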
2312.00752 | 90 | [51] Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Shigang Li, and Torsten Hoefler. "Data Movement is All You Need: A Case Study on Optimizing Transformers". In: Proceedings of Machine Learning and Systems 3 (2021), pp. 711–732.
[52] Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. "Gated Orthogonal Recurrent Units: On Learning to Forget". In: Neural Computation 31.4 (2019), pp. 765–783.
[53] Rudolph Emil Kalman. "A New Approach to Linear Filtering and Prediction Problems". In: (1960).
[54] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention". In: International Conference on Machine Learning. PMLR. 2020, pp. 5156–5165. | 2312.00752#90 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 91 | [55] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. "DiffWave: A Versatile Diffusion Model for Audio Synthesis". In: International Conference on Learning Representations. 2021.
[56] Chrysoula Kosma, Giannis Nikolentzos, and Michalis Vazirgiannis. "Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series". In: arXiv preprint arXiv:2308.03210 (2023).
[57] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. "ImageNet Classification with Deep Convolutional Neural Networks". In: Advances in Neural Information Processing Systems (NeurIPS) 25 (2012).
[58] Tao Lei. "When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute". In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021, pp. 7633–7648.
[59] Tao Lei, Yu Zhang, Sida I Wang, Hui Dai, and Yoav Artzi. "Simple Recurrent Units for Highly Parallelizable Recurrence". In: arXiv preprint arXiv:1709.02755 (2017).
2312.00752 | 92 | Recurrence". In: arXiv preprint arXiv:1709.02755 (2017).
[60] Mario Lezcano-Casado and David Martínez-Rubio. "Cheap Orthogonal Constraints in Neural Networks: A Simple Parametrization of the Orthogonal and Unitary Group". In: The International Conference on Machine Learning (ICML). 2019.
[61] Yuhong Li, Tianle Cai, Yi Zhang, Deming Chen, and Debadeepta Dey. "What Makes Convolutional Models Great on Long Sequence Modeling?" In: The International Conference on Learning Representations (ICLR). 2023.
[62] Vasileios Lioutas and Yuhong Guo. "Time-aware Large Kernel Convolutions". In: The International Conference on Machine Learning (ICML). PMLR. 2020, pp. 6172–6183.
[63] Chris Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, and Feryal Behbahani. "Structured State Space Models for In-Context Reinforcement Learning". In: Advances in Neural Information Processing Systems (NeurIPS). 2023.
2312.00752 | 93 | [64] Shahar Lutati, Itamar Zimerman, and Lior Wolf. "Focus Your Attention (with Adaptive IIR Filters)". In: arXiv preprint arXiv:2305.14952 (2023).
[65] Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. "Mega: Moving Average Equipped Gated Attention". In: The International Conference on Learning Representations (ICLR). 2023.
[66] Eric Martin and Chris Cundy. "Parallelizing Linear Recurrent Neural Nets Over Sequence Length". In: The International Conference on Learning Representations (ICLR). 2018.
[67] Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model". In: The International Conference on Learning Representations (ICLR). 2017.
2312.00752 | 94 | [68] Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. "Long Range Language Modeling via Gated State Spaces". In: The International Conference on Learning Representations (ICLR). 2023.
[69] Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. "Efficient Orthogonal Parametrisation of Recurrent Neural Networks using Householder Reflections". In: International Conference on Machine Learning. PMLR. 2017, pp. 2401–2409.
[70] Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, and Christopher Ré. "S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces". In: Advances in Neural Information Processing Systems (NeurIPS). 2022.
[71] Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, et al. "HyenaDNA: Long-range Genomic Sequence Modeling at Single Nucleotide Resolution". In: Advances in Neural Information Processing Systems (NeurIPS). 2023.
2312.00752 | 95 | [72] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. "In-context Learning and Induction Heads". In: Transformer Circuits Thread (2022). https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html.
[73] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. "WaveNet: A Generative Model for Raw Audio". In: arXiv preprint arXiv:1609.03499 (2016).
2312.00752 | 96 | [74] Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. "Resurrecting Recurrent Neural Networks for Long Sequences". In: The International Conference on Machine Learning (ICML). 2023.
[75] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. "The LAMBADA Dataset: Word Prediction Requiring a Broad Discourse Context". In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016, pp. 1525–1534.
[76] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. "On the Difficulty of Training Recurrent Neural Networks". In: International Conference on Machine Learning. 2013, pp. 1310–1318.
2312.00752 | 97 | [77] Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, et al. "RWKV: Reinventing RNNs for the Transformer Era". In: arXiv preprint arXiv:2305.13048 (2023).
[78] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong. "Random Feature Attention". In: The International Conference on Learning Representations (ICLR). 2021.
[79] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. "Hyena Hierarchy: Towards Larger Convolutional Language Models". In: The International Conference on Machine Learning (ICML). 2023.
[80] Zhen Qin, Xiaodong Han, Weixuan Sun, Bowen He, Dong Li, Dongxu Li, Yuchao Dai, Lingpeng Kong, and Yiran Zhong. "Toeplitz Neural Network for Sequence Modeling". In: The International Conference on Learning Representations (ICLR). 2023.
2312.00752 | 98 | [81] Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. "The devil in linear transformer". In: arXiv preprint arXiv:2210.10340 (2022).
[82] Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, and Yiran Zhong. "CosFormer: Rethinking Softmax in Attention". In: The International Conference on Learning Representations (ICLR). 2022.
[83] Ali Rahimi and Benjamin Recht. "Random features for large-scale kernel machines". In: Advances in Neural Information Processing Systems 20 (2007).
[84] Prajit Ramachandran, Barret Zoph, and Quoc V Le. "Swish: A Self-gated Activation Function". In: arXiv preprint arXiv:1710.05941 7.1 (2017), p. 5.
2312.00752 | 99 | [85] David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. "CKConv: Continuous Kernel Convolution For Sequential Data". In: arXiv preprint arXiv:2102.02611 (2021).
[86] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. "Winogrande: An Adversarial Winograd Schema Challenge at Scale". In: Communications of the ACM 64.9 (2021), pp. 99–106.
2312.00752 | 100 | [87] George Saon, Ankit Gupta, and Xiaodong Cui. "Diagonal State Space Augmented Transformers for Speech Recognition". In: ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. 2023, pp. 1–5.
[88] Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. "Linear Transformers are Secretly Fast Weight Programmers". In: The International Conference on Machine Learning (ICML). PMLR. 2021, pp. 9355–9366.
[89] Noam Shazeer. "GLU Variants Improve Transformer". In: arXiv preprint arXiv:2002.05202 (2020).
[90] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. "Large Language Models can be Easily Distracted by Irrelevant Context". In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 31210–31227.
[91] Jiaxin Shi, Ke Alexander Wang, and Emily
2312.00752 | 101 | The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 31210–31227.
[91] Jiaxin Shi, Ke Alexander Wang, and Emily Fox. "Sequence Modeling with Multiresolution Convolutional Memory". In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 31312–31327.
[92] Jimmy TH Smith, Andrew Warrington, and Scott W Linderman. "Simplified State Space Layers for Sequence Modeling". In: The International Conference on Learning Representations (ICLR). 2023.
[93] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. "Roformer: Enhanced Transformer with Rotary Position Embedding". In: arXiv preprint arXiv:2104.09864 (2021).
2312.00752 | 102 | [94] Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. "Retentive Network: A Successor to Transformer for Large Language Models". In: arXiv preprint arXiv:2307.08621 (2023).
[95] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. "Sequence to Sequence Learning with Neural Networks". In: Advances in Neural Information Processing Systems (NeurIPS) 27 (2014).
[96] Corentin Tallec and Yann Ollivier. "Can Recurrent Neural Networks Warp Time?" In: The International Conference on Learning Representations (ICLR). 2018.
[97] Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. "Long Range Arena: A Benchmark for Efficient Transformers". In: International Conference on Learning Representations (ICLR). 2021.
2312.00752 | 103 | [98] Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. "Efficient Transformers: A Survey". In: ACM Computing Surveys 55.6 (2022), pp. 1–28.
[99] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. "Llama: Open and Efficient Foundation Language Models". In: arXiv preprint arXiv:2302.13971 (2023).
[100] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need". In: Advances in Neural Information Processing Systems (NeurIPS). 2017.
2312.00752 | 104 | [101] Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. "On Orthogonality and Learning Recurrent Networks with Long Term Dependencies". In: International Conference on Machine Learning. PMLR. 2017, pp. 3570–3578.
[102] Jue Wang, Wentao Zhu, Pichao Wang, Xiang Yu, Linda Liu, Mohamed Omar, and Raffay Hamid. "Selective Structured State-Spaces for Long-form Video Understanding". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023, pp. 6387–6397.
[103] Pete Warden. "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition". In: ArXiv abs/1804.03209 (2018).
[104] Samuel Williams, Andrew Waterman, and David Patterson. "Roofline: An Insightful Visual Performance Model for Multicore Architectures". In: Communications of the ACM 52.4 (2009), pp. 65–76.
2312.00752 | 105 | [105] Brandon Yang, Gabriel Bender, Quoc V Le, and Jiquan Ngiam. "CondConv: Conditionally Parameterized Convolutions for Efficient Inference". In: Advances in Neural Information Processing Systems (NeurIPS) 32 (2019).
[106] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. "HellaSwag: Can a Machine Really Finish Your Sentence?" In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.
[107] Shuangfei Zhai, Walter Talbott, Nitish Srivastava, Chen Huang, Hanlin Goh, Ruixiang Zhang, and Josh Susskind. "An Attention Free Transformer". In: arXiv preprint arXiv:2105.14103 (2021).
[108] Michael Zhang, Khaled K Saab, Michael Poli, Tri Dao, Karan Goel, and Christopher Ré. "Effectively Modeling Time Series with Simple Discrete State Spaces". In: The International Conference on Learning Representations (ICLR). 2023.
2312.00752 | 106 | [109] Lin Zheng, Chong Wang, and Lingpeng Kong. "Linear complexity randomized self-attention mechanism". In: International Conference on Machine Learning. PMLR. 2022, pp. 27011–27041.
[110] Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Charles, Eren Manavoglu, Tuo Zhao, and Jianfeng Gao. "Efficient Long Sequence Modeling via State Space Augmented Transformer". In: arXiv preprint arXiv:2212.08136 (2022).

# A Discussion: Selection Mechanism

Our selection mechanism is inspired by and related to concepts such as gating, hypernetworks, and data-dependence. It can also be viewed as related to "fast weights" (J. Ba et al. 2016), which connects classical RNNs with the mechanism of linear attention (Schlag, Irie, and Schmidhuber 2021). However, we believe that it is a distinct concept that is worth clarifying.
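To make the idea of selection concrete before contrasting it with the concepts below, here is a naive per-step sketch of a selective SSM recurrence in which the step size and the B, C parameters are functions of the current input. This is a reference-style loop for intuition only, not the paper's hardware-aware parallel algorithm; the projection weights and shapes are illustrative placeholders.

```python
import numpy as np

def selective_ssm(x, A, w_delta, W_B, W_C):
    """Naive selective SSM scan over a 1-D input sequence x.

    A: (N,) diagonal state matrix (fixed). w_delta (scalar), W_B, W_C ((N,) each):
    placeholder projections that make the step size and the B, C parameters
    functions of the current input x_t.
    """
    h = np.zeros_like(A)
    ys = []
    for x_t in x:
        delta = np.log1p(np.exp(w_delta * x_t))   # softplus: input-dependent step size
        B_t, C_t = W_B * x_t, W_C * x_t           # input-dependent B and C
        A_bar = np.exp(delta * A)                 # discretize A with the selected step
        h = A_bar * h + (delta * B_t) * x_t       # selectively propagate or forget the state
        ys.append(float(C_t @ h))                 # read out through the selected C
    return np.array(ys)

# Toy usage with random placeholder weights (N = 4 state dimensions).
rng = np.random.default_rng(0)
A = -np.abs(rng.standard_normal(4))               # negative entries keep the state stable
y = selective_ssm(rng.standard_normal(16), A, 0.5,
                  rng.standard_normal(4), rng.standard_normal(4))
print(y.shape)  # (16,)
```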
2312.00752 | 107 | Gating. Gating originally referred to the gating mechanisms of RNNs such as the LSTM (Hochreiter and Schmidhuber 1997) and GRU (J. Chung et al. 2014), or the gated equation (5) in Theorem 1. This was interpreted as a particular mechanism for controlling whether to let an input into the hidden state of an RNN. In particular, this affects the propagation of signal through time and causes inputs to interact along the sequence length dimension.
However, the concept of gating has since been relaxed in popular usage to simply mean any multiplicative interaction (often with an activation function). For example, elementwise multiplicative components of neural network architectures (that do not interact along sequence length) are now commonly referred to as gated architectures (Hua et al. 2022; Mehta et al. 2023), despite a very different meaning than the original RNN sense. Thus we believe that the original concept of RNN gating and the popular usage of multiplicative gating actually have very different semantic meanings.
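A small sketch (with made-up weights) of the distinction drawn above: an RNN-style gate decides how much of each input enters a state carried across time, whereas the relaxed "gating" is a purely elementwise multiplicative branch with no interaction along the sequence length.

```python
import numpy as np

def rnn_style_gating(x, w_g):
    """Original RNN sense: a gate g_t controls whether x_t enters the carried state,
    so inputs interact along the sequence length dimension."""
    h, hs = 0.0, []
    for x_t in x:
        g_t = 1.0 / (1.0 + np.exp(-w_g * x_t))   # sigmoid gate computed from the input
        h = (1.0 - g_t) * h + g_t * x_t          # leaky-integrator / GRU-like update
        hs.append(h)
    return np.array(hs)

def multiplicative_gating(x, w_g):
    """Relaxed popular sense: an elementwise multiplicative interaction; each position
    is transformed independently, with no propagation across time."""
    g = 1.0 / (1.0 + np.exp(-w_g * x))
    return g * x

x = np.array([1.0, -0.5, 2.0, 0.0])
print(rnn_style_gating(x, w_g=2.0))       # output at each step depends on the running state
print(multiplicative_gating(x, w_g=2.0))  # output is position-wise only
```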
Hypernetworks. Hypernetworks refer to neural networks whose parameters are themselves generated by smaller neural networks. The original idea (Ha, Dai, and Quoc V. Le 2017) used it in a narrow sense to define a large RNN whose recurrent parameters are generated by a smaller RNN.
Data-dependence. Similar to hypernetworks, data-dependence can refer to any notion where some parameters of the model depend on the data (Poli et al. 2023).
Example: GLU Activation. To illustrate the issues with these concepts, consider a simple diagonal linear layer $y = Dx$, where $D$ is a diagonal weight parameter. Now suppose that $D$ is itself generated from a linear transformation of $x$, with an optional nonlinearity: $D = \sigma(Wx)$. Since it is diagonal, the multiplication becomes an elementwise product: $y = \sigma(Wx) \odot x$.
This is a rather trivial transformation, yet it technically satisfies the common meanings of gating (since it has a multiplicative "branch"), hypernetworks (since the parameter $D$ is generated by another layer), and data-dependence (since $D$ depends on the data $x$). However, this in fact simply defines a GLU function, which is so simple that it is often considered just an activation function (Dauphin et al. 2017; Shazeer 2020) instead of a meaningful layer.
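As a concrete illustration, here is a minimal PyTorch sketch of this GLU-style elementwise gate (module and variable names are ours, not from any released code):

```python
import torch
import torch.nn as nn

class GLUGate(nn.Module):
    """y = sigmoid(W x) * x: technically "gated", "hypernetwork-like", and
    "data-dependent", yet really just a GLU-style activation."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # generates the diagonal "parameter" D from x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.proj(x))  # D = sigma(W x), one value per channel
        return gate * x                     # elementwise product: y = D * x

x = torch.randn(2, 16)    # (batch, dim)
y = GLUGate(16)(x)
print(y.shape)            # torch.Size([2, 16])
```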
Selection. Thus, while selection mechanisms could be considered a special case of ideas such as architectural gating, hypernetworks, or data-dependence, so can an enormous range of other constructions, essentially anything with a multiplication, including standard attention mechanisms (Bahdanau, Cho, and Bengio 2015; Vaswani et al. 2017), and we find it uninformative to think of them as such.
Instead, we view it as most closely related to the gating mechanism of traditional RNNs, which is a special case (Theorem 1) and also has a deeper history of connections to SSMs through variable (input-dependent) discretization of $\Delta$ (Funahashi and Nakamura 1993; Gu, Dao, et al. 2020; Tallec and Ollivier 2018). We also eschew the term "gating" in favor of selection to clarify the overloaded use of the former. More narrowly, we use selection to refer to the mechanistic action of a model to select or ignore inputs and facilitate data interaction along the sequence length (Section 3.1). Beyond selective SSMs and gated RNNs, other examples may include input-dependent convolutions (Kosma, Nikolentzos, and Vazirgiannis 2023; Lioutas and Guo 2020; Lutati, Zimerman, and Wolf 2023; Yang et al. 2019) and even attention.
# B Related Work
We give an overview of several prior works related to our methods. Some of the most closely related models include recurrent layers such as S4, S5, and quasi-RNNs, as well as end-to-end architectures such as H3, RetNet, and RWKV.
# B.1 S4 Variants and Derivatives
We give a brief overview of some structured SSMs from past work, particularly those that have a relation to our method.
⢠S4 (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021) introduced the ï¬rst structured SSM, describing diagonal structure and diagonal plus low-rank (DPLR). It focused on eï¬cient convolutional algorithms for DPLR SSMs due to a connection to continuous-time online memorization (HIPPO) (Gu, Dao, et al. 2020).
⢠DSS (Gupta, Gu, and Berant 2022) ï¬rst discovered the empirical eï¬ectiveness of diagonal structured SSMs by approximating the HIPPO initialization. This was expanded on theoretically in S4D (Gu, Gupta, et al. 2022). | 2312.00752#111 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• S5 (Smith, Warrington, and Linderman 2023) independently discovered the diagonal SSM approximation, and is the first S4 model to be computed recurrently with the parallel scan. However, this required lowering the effective state dimension, which they accomplished by switching the SSM dimensions from a SISO (single-input single-output) to MIMO (multi-input multi-output) formulation. Our proposed S6 shares the scan, but differs by (i) keeping the SISO dimensions, which provides a larger effective recurrent state, (ii) using a hardware-aware algorithm to overcome the computation issue, (iii) adding the selection mechanism (a minimal sketch of the scan operator involved appears at the end of this subsection).
Lu et al. (2023) applied S5 to meta-RL in order to handle resetting the SSM state between episode trajectories. Their mechanism can be viewed as a particular hard-coded instance of a selection mechanism, where A is manually set to 0, instead of our learnable mechanism that depends on the input. It would be interesting to apply selective SSMs generically to this setting and probe if the model has learned to automatically reset its state on episode boundaries.
• Mega (Ma et al. 2023) introduced a simplification of S4 to be real- instead of complex-valued, giving it an interpretation of being an exponential moving average (EMA). They additionally make an interesting connection of the discretization step of SSMs to an EMA damping term. Contrary to findings in the original S4 papers, this was the first model to show that real-valued SSMs are empirically effective in certain settings or when combined with different architectural components.
⢠Liquid S4 (Hasani et al. 2023) is also motivated by augmenting S4 with an input-dependent state transition. From this perspective it shares similarity to selection mechanisms, although in a limited form which is still computed convolutionally and close to LTI. | 2312.00752#113 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• SGConv (Y. Li et al. 2023), Hyena (Poli et al. 2023), LongConv (Fu et al. 2023), MultiresConv (J. Shi, K. A. Wang, and Fox 2023), and Toeplitz Neural Network (Qin, Han, W. Sun, He, et al. 2023) all focus on the convolutional representation of S4 and create global or long convolution kernels with different parameterizations. However, these methods cannot do fast autoregressive inference directly.
Notably, all of these methods, and all other structured SSMs that we are aware of, have been non-selective and usually strictly LTI (linear time invariant).
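As referenced in the S5/S6 bullet above, here is a minimal sketch, in plain PyTorch and with names of our own choosing, of the associative operator that lets an input-dependent recurrence h_t = A_bar[t] * h_{t-1} + Bx[t] be evaluated by a scan; the paper's hardware-aware kernel fuses this with discretization and avoids materializing intermediate states, which this sketch does not attempt.

```python
import torch

def scan_op(e1, e2):
    """Associative combine for elements (a, b) representing h -> a * h + b:
    applying e1 then e2 gives h -> a2 * (a1 * h + b1) + b2."""
    a1, b1 = e1
    a2, b2 = e2
    return a1 * a2, a2 * b1 + b2

def selective_recurrence_ref(A_bar, Bx):
    """Sequential reference: h_t = A_bar[t] * h_{t-1} + Bx[t], with h_0 = 0."""
    h, out = torch.zeros(()), []
    for a, b in zip(A_bar, Bx):
        h = a * h + b
        out.append(h)
    return torch.stack(out)

L = 8
A_bar, Bx = torch.rand(L), torch.randn(L)   # per-step, input-dependent coefficients

# Folding with scan_op reproduces the recurrence; a parallel scan can evaluate the
# same prefix combines in O(log L) depth because scan_op is associative.
acc, prefix = (torch.tensor(1.0), torch.tensor(0.0)), []
for t in range(L):
    acc = scan_op(acc, (A_bar[t], Bx[t]))
    prefix.append(acc[1])

assert torch.allclose(torch.stack(prefix), selective_recurrence_ref(A_bar, Bx))
```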
# B.2 SSM Architectures
We use SSM architectures or state space neural networks (SSNN) to refer to deep neural network architectures incorporating one of the previous SSMs as a black box layer.
⢠GSS (Mehta et al. 2023) was the ï¬rst gated neural network architecture incorporating SSMs. It is motivated by the gated attention unit (GAU) of Hua et al. (2022) and looks quite similar to our block, except with additional projections. Most importantly, its projection contracts the model dimension to reduce the state size of the SSM, while ours expands the model dimension in order to increase the state size, based on the motivation in Section 3.1.
⢠Mega (Ma et al. 2023) combined the EMA simpliï¬cation of S4 described above into a hybrid architecture using an eï¬cient attention approximation.
⢠H3 (Dao, Fu, Saab, et al. 2023) is motivated by combining S4 with linear attention (Katharopoulos et al. 2020). It is the ï¬rst to generalize this formulation of linear attention to more general recurrences, which is also the basis of later architectures. | 2312.00752#115 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• Selective S4 (J. Wang et al. 2023) incorporates S4 as a black box to generate a binary mask which is multiplied on the input. While sharing the "selection" name, we consider this an architectural modification that is closer to architectural gating than a selection mechanism (Appendix A). For example, we hypothesize that it would not solve the Selective Copying task because simply masking out the irrelevant inputs does not affect the spacing between the relevant ones (indeed, the Selective Copying task can even be viewed as coming pre-masked if the noise tokens are embedded to 0).
⢠RetNet (Y. Sun et al. 2023) is also based on Linear Attention and very similar to H3, but reduces the inner S4 layer to a special case where the state dimension is ð = 1. Although not framed as such, its recurrence can be viewed as a special case of a linear SSM. | 2312.00752#116 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
Its primary source of improvement is using a linear attention with large head dimension, which can be viewed as another method to perform input-dependent state expansion. Using a larger head dimension in the context of linear attention variants was first done by H3, but not extensively used since this requires a proportional amount of extra computation. RetNet avoids this with an alternate way to parallelize the computation with a variant of standard multi-head attention instead of convolutions, made feasible by their particular special case of SSMs which acts as a simple EMA.
⢠RWKV (B. Peng et al. 2023) is another recent RNN designed for language modeling. It is based on AFT (attention-free Transformer (S. Zhai et al. 2021)), another variant of linear attention. Its main âWKVâ mechanism involves LTI recurrences and can be seen as the ratio of two SSMs.
We also highlight the gated attention unit (GAU) from Hua et al. (2022), which was motivated by combining the Transformer's MHA and MLP blocks together and was an inspiration for our architecture (Section 3.4) combining the H3 and MLP blocks.
# B.3 Relationship to RNNs
RNNs and SSMs are broadly related, as they both involve the concepts of recurrence on a latent state.
Several older RNNs such as the strongly typed RNN (Balduzzi and Ghifary 2016), quasi-RNN (QRNN) (Bradbury et al. 2016), and simple recurrent unit (SRU) (Lei 2021; Lei et al. 2017) involve forms of gated RNNs without time-wise nonlinearities. Because of the connections between gating mechanisms and selection mechanisms, these can be viewed as cases of selective SSMs, and are thus more powerful in a sense than the family of LTI structured SSMs above. The main differences are:
⢠They do not use state expansion (ð = 1) or selective B, C parameters, both of which are important for performance (Section 4.6).
⢠They use a heuristic gating mechanism, which we generalize as a consequence of the selection mechanism + discretization (Theorem 1). The connections to principled SSM theory provides better parameterizations and initializations (Section 3.6). | 2312.00752#118 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
Additionally, older RNNs famously suffered from efficiency issues and the vanishing gradients problem (Pascanu, Mikolov, and Bengio 2013), both caused by their sequential nature. The latter could be solved for some of the above RNNs by leveraging the parallel scan (Martin and Cundy 2018), but the former was difficult without theory later developed for SSMs. For example, modern structured SSMs differ in more careful parameterization of the recurrent dynamics inspired by classical SSM theory (e.g. through discretization (Gu, Johnson, Goel, et al. 2021; Gu, Johnson, Timalsina, et al. 2023), or direct analysis (Orvieto et al. 2023)).
We also note that there is a long line of work on orthogonal RNNs (Arjovsky, Shah, and Bengio 2016; Henaff, Szlam, and LeCun 2016; Lezcano-Casado and Martínez-Rubio 2019; Mhammedi et al. 2017; Vorontsov et al. 2017)
which are motivated by constraining the A transition matrix to be orthogonal or unitary, in order to control its eigenvalues and prevent the vanishing gradient problem. However, these had other limitations; we believe that these stem from the fact that orthogonal/unitary RNNs are also LTI. For example, they are almost always evaluated on the Copying task, which they can solve perfectly, but are observed to struggle on the Selective Copying task (Jing et al. 2019).
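To make the Copying vs. Selective Copying distinction concrete, here is a minimal data-generation sketch (ours; token ids, lengths, and function names are illustrative only):

```python
import random

def copying_instance(n_content=4, n_noise=12, vocab=range(2, 10), noise=0, sep=1):
    """Plain Copying: content tokens sit in a fixed block, so an LTI system
    can solve the task with a constant time shift."""
    content = [random.choice(list(vocab)) for _ in range(n_content)]
    return content + [noise] * n_noise + [sep], content

def selective_copying_instance(n_content=4, n_noise=12, vocab=range(2, 10), noise=0, sep=1):
    """Selective Copying: the content tokens are scattered at random positions,
    so the input-output lag is content-dependent and must be selected for."""
    content = [random.choice(list(vocab)) for _ in range(n_content)]
    seq = [noise] * (n_content + n_noise)
    for tok, pos in zip(content, sorted(random.sample(range(len(seq)), n_content))):
        seq[pos] = tok
    return seq + [sep], content

print(copying_instance())
print(selective_copying_instance())
```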
# B.4 Linear Attention
The Linear Attention (LA) (Katharopoulos et al. 2020) framework is an important result popularizing kernel attention and showing how it relates to recurrent autoregressive models. Many variants have proposed alternative kernels and other modifications. Random Feature Attention (RFA) (H. Peng et al. 2021) chooses the kernel feature map to approximate softmax attention (i.e. the exp feature map) using the random Fourier feature approximation of Gaussian kernels (Rahimi and Recht 2007). Performer (Choromanski et al. 2021) finds an approximation to the exponential kernel involving only positive features, which also allows the softmax normalization term. TransNormer (Qin, Han, W. Sun, D. Li, et al. 2022) showed that the LA denominator term can be unstable and proposed replacing it with a LayerNorm. cosFormer (Qin, W. Sun, et al. 2022) augments RFA with a cosine reweighting mechanism that incorporates positional information to emphasize locality. Linear Randomized Attention (Zheng, C. Wang, and L. Kong 2022) generalizes RFA from the perspective of importance sampling, and extends it to provide better estimates of the full softmax kernel (rather than just the exp-transformed numerator).
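To illustrate the recurrent view that this framework popularized, here is a minimal sketch (ours) of causal linear attention as an RNN-like update over running sums; the feature map phi(x) = elu(x) + 1 is a common choice in this line of work, and the epsilon is only for numerical safety.

```python
import torch

def causal_linear_attention(q, k, v, phi=lambda x: torch.nn.functional.elu(x) + 1):
    """Recurrent form of causal linear attention.

    q, k, v: (L, d). Returns (L, d). The running sums
        S_t = sum_{s<=t} phi(k_s) v_s^T   and   z_t = sum_{s<=t} phi(k_s)
    turn attention into a constant-memory autoregressive update.
    """
    L, d = q.shape
    S = torch.zeros(d, d)
    z = torch.zeros(d)
    out = []
    for t in range(L):
        qt, kt = phi(q[t]), phi(k[t])
        S = S + torch.outer(kt, v[t])
        z = z + kt
        out.append((qt @ S) / (qt @ z + 1e-6))   # numerator / denominator
    return torch.stack(out)

y = causal_linear_attention(*torch.randn(3, 32, 16))   # three (L=32, d=16) tensors
print(y.shape)                                         # torch.Size([32, 16])
```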
Aside from kernel attention, many other variants of efficient attention exist; the survey Tay, Dehghani, Bahri, et al. (2022) offers an extensive categorization of many of these.
# B.5 Long Context Models
Long context has become a popular subject, and several recent models have claimed to scale to longer and longer sequences. However, these claims are often made from a computational standpoint and have not been extensively validated. These include:
⢠Recurrent Memory Transformer (Bulatov, Kuratov, and Burtsev 2023), a lightweight wrapper around a Transformer backbone. It showed ability to generalize up to 1M sequences but only on synthetic memorization tasks; their main result is similar to our Induction Heads extrapolation experiment (Table 2).
⢠LongNet (Ding et al. 2023), which claimed to scale to 1B length but only evaluated on length < 100ð¾ for actual tasks.
⢠Hyena and HyenaDNA (Nguyen, Poli, et al. 2023; Poli et al. 2023), which claimed to leverage up to 1M context. However, their experiments trained on proportionally more data at longer contexts, making it hard to conclude if quality improvements at 1M context are due to context length or due to more data and computation. | 2312.00752#122 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• Sparse Transformer (Child et al. 2019) showed a proof-of-concept of using a strided sparse attention Transformer to model audio waveforms of length $2^{20} = 1048576$, although it did not discuss performance tradeoffs when controlling for computation and model size.
In contrast, we believe this work presents one of the first approaches to meaningfully demonstrate increasing performance with longer context.
# C Mechanics of Selective SSMs
Proof of Theorem 1. Consider a selective SSM (Algorithm 2) with $N = 1$, $A = -1$, $B = 1$, $s_\Delta = \mathsf{Linear}(x)$, $\tau_\Delta = \mathsf{softplus}$. The corresponding continuous-time SSM (1) is
$$h'(t) = -h(t) + x(t)$$
which is also called a leaky integrator.
The discretization step size is
$$\Delta_t = \tau_\Delta(\mathsf{Parameter} + s_\Delta(x_t)) = \mathsf{softplus}(\mathsf{Parameter} + \mathsf{Linear}(x_t)) = \mathsf{softplus}(\mathsf{Linear}(x_t))$$
where we observe that the parameter can be viewed as a learnable bias and folded into the linear projection.
Now applying the zero-order hold (ZOH) discretization formulas:
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 125 | where we observe that the parameter can be viewed as a learnable bias and folded into the linear projection.
Now applying the zero-order hold (ZOH) discretization formulas:
$$\overline{A}_t = \exp(\Delta A) = \frac{1}{1 + \exp(\mathsf{Linear}(x_t))} = \sigma(-\mathsf{Linear}(x_t)) = 1 - \sigma(\mathsf{Linear}(x_t))$$
$$\overline{B}_t = (\Delta A)^{-1}(\exp(\Delta A) - I) \cdot \Delta B = -(\exp(\Delta A) - I) = 1 - \overline{A}_t = \sigma(\mathsf{Linear}(x_t)).$$
Thus the final discrete recurrence (2a) is
$$g_t = \sigma(\mathsf{Linear}(x_t))$$
$$h_t = (1 - g_t)h_{t-1} + g_t x_t$$
as desired.
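As a sanity check of this derivation, the following sketch (an illustration, not code from the paper; the projection output $z_t = \mathsf{Linear}(x_t)$ is replaced by a random vector) verifies numerically that the ZOH-discretized selective SSM with these parameter choices coincides with the gated recurrence above.

```python
# Illustrative check: with N = 1, A = -1, B = 1 and Delta_t = softplus(z_t),
# the ZOH-discretized selective SSM equals the gated recurrence
# h_t = (1 - g_t) h_{t-1} + g_t x_t with g_t = sigmoid(z_t).
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
L = 16
x = rng.normal(size=L)     # input sequence
z = rng.normal(size=L)     # stand-in for Linear(x_t)

A, B = -1.0, 1.0
delta = softplus(z)

# ZOH discretization: Abar = exp(delta*A), Bbar = (delta*A)^-1 (exp(delta*A) - 1) * delta*B
Abar = np.exp(delta * A)
Bbar = (np.exp(delta * A) - 1.0) / (delta * A) * (delta * B)

h_ssm = np.zeros(L)
h_gate = np.zeros(L)
h1 = h2 = 0.0
for t in range(L):
    h1 = Abar[t] * h1 + Bbar[t] * x[t]      # discretized selective SSM
    g = sigmoid(z[t])
    h2 = (1.0 - g) * h2 + g * x[t]          # gated ("leaky") recurrence
    h_ssm[t], h_gate[t] = h1, h2

assert np.allclose(h_ssm, h_gate), "the two recurrences should coincide"
print("max abs difference:", np.max(np.abs(h_ssm - h_gate)))
```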
# D Hardware-aware Algorithm For Selective SSMs
Without input-dependent selectivity, SSMs can be efficiently implemented as a convolution (Dao, Fu, Saab, et al. 2023; Gu, Goel, and Ré 2022), which leverages the fast Fourier transform (FFT) as a primitive. With selectivity, SSMs are no longer equivalent to convolution, but we leverage the parallel associative scan. While SSM scans are theoretically efficient (O(BLDN) FLOPs, scaling linearly in L), training foundation models with selective SSMs requires them to be efficient on modern hardware (GPUs) as well. We describe how we use kernel fusion and recomputation to make the SSM scan fast and memory-efficient. We evaluate the speed of our scan implementation compared to convolution and attention in Section 4.5, showing that it is up to 7× faster than attention at sequence length 32K, and is as memory-efficient as the best attention implementation (FlashAttention).
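The associative scan applies here because the discretized recurrence $h_t = \overline{A}_t h_{t-1} + \overline{B}_t x_t$ is a first-order linear recurrence, which can be evaluated under an associative combine operator. Below is a minimal sketch (assuming a scalar recurrence per channel; the Hillis–Steele-style loop and names are illustrative, not the paper's CUDA kernel) that checks the operator against the sequential recurrence.

```python
# Minimal sketch: the recurrence h_t = abar_t * h_{t-1} + bbar_t * x_t is an
# inclusive prefix "scan" under the associative operator
#   (a1, b1) o (a2, b2) = (a2*a1, a2*b1 + b2),
# so it can be evaluated with a parallel scan.
import numpy as np

def combine(e1, e2):
    a1, b1 = e1
    a2, b2 = e2
    return (a2 * a1, a2 * b1 + b2)

def associative_scan(elems):
    """Work-inefficient Hillis-Steele-style inclusive scan over (a, b) pairs."""
    out = list(elems)
    stride = 1
    while stride < len(out):
        # iterate from the end so out[i - stride] is still the previous-pass value
        for i in range(len(out) - 1, stride - 1, -1):
            out[i] = combine(out[i - stride], out[i])
        stride *= 2
    return out

rng = np.random.default_rng(1)
L = 8
abar = rng.uniform(0.5, 1.0, size=L)
bbar_x = rng.normal(size=L)              # bbar_t * x_t folded together

# sequential reference
h_ref, h = [], 0.0
for t in range(L):
    h = abar[t] * h + bbar_x[t]
    h_ref.append(h)

h_scan = [b for (_, b) in associative_scan(list(zip(abar, bbar_x)))]
assert np.allclose(h_ref, h_scan)
```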
Speed. On modern hardware accelerators (GPUs) most operations (except matrix multiply) are bounded by memory bandwidth (Dao, Fu, Ermon, et al. 2022; Ivanov et al. 2021; Williams, Waterman, and Patterson 2009). This is the case with our scan operation, and we use kernel fusion to reduce the amount of memory IOs, leading to a significant speedup compared to a standard implementation.
The standard way to implement the scan algorithm in Section 3.2 is to prepare the scan input Ā, B̄ of size (B, L, D, N) in GPU HBM (high-bandwidth memory, commonly referred to as GPU memory), call a parallel associative scan implementation to write the scan output of size (B, L, D, N) to GPU HBM, then multiply that scan output with C to produce an output of size (B, L, D). However, this requires a number of memory reads/writes on the order of O(BLDN). We can instead fuse the discretization step, the scan, and the multiplication with C into one kernel:
1. We read in O(BLD + DN) bytes of memory (Δ, A, B, C) from slow HBM to fast SRAM.
2. We discretize to produce Ā, B̄ of size (B, L, D, N) in SRAM.
3. We perform a parallel associative scan, yielding intermediate states of size (B, L, D, N) in SRAM.
4. We multiply and sum with C, producing outputs of size (B, L, D) and write it to HBM.
This way, we reduce IOs by a factor of O(N) (the state dimension), which in practice speeds up the operation by 20-40 times (Section 4.5).
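For reference, a plain (unfused) version of steps 1–4 might look like the following sketch (illustrative sizes and variable names only; the actual kernel is written in CUDA, and the B-matrix discretization here is a simple Euler stand-in). It materializes the (B, L, D, N) tensors Ā, B̄ and the scan states that the fused kernel instead keeps in SRAM, which is exactly the HBM traffic the fusion avoids.

```python
# Reference (unfused) selective-scan computation, materializing the (B, L, D, N)
# intermediates that the fused kernel keeps in SRAM rather than HBM.
import numpy as np

Bsz, L, D, N = 2, 32, 4, 8
rng = np.random.default_rng(0)

x     = rng.normal(size=(Bsz, L, D))
delta = np.log1p(np.exp(rng.normal(size=(Bsz, L, D))))   # softplus, > 0
A     = -np.exp(rng.normal(size=(D, N)))                 # negative real parameterization
Bmat  = rng.normal(size=(Bsz, L, N))                     # input-dependent B
Cmat  = rng.normal(size=(Bsz, L, N))                     # input-dependent C

# Step 2: discretize (ZOH for A; simple Euler stand-in for B)
Abar = np.exp(delta[..., None] * A)                      # (B, L, D, N)
Bbar = delta[..., None] * Bmat[:, :, None, :]            # (B, L, D, N)

# Step 3: scan over the length dimension
# (sequential here for clarity; the kernel uses a parallel associative scan)
h = np.zeros((Bsz, D, N))
y = np.zeros((Bsz, L, D))
for t in range(L):
    h = Abar[:, t] * h + Bbar[:, t] * x[:, t, :, None]   # (B, D, N)
    # Step 4: contract the state dimension with C
    y[:, t] = np.einsum("bdn,bn->bd", h, Cmat[:, t])
print(y.shape)  # (Bsz, L, D)
```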
Table 11: (Induction heads.) Models are trained on sequence length 2^8 = 256, and tested on various sequence lengths from 2^6 = 64 up to 2^20 = 1048576. ✓ denotes perfect generalization accuracy, while ✗ denotes out of memory.
Test accuracy (%) at sequence length:

| Model | Params | 2^6 | 2^7 | 2^8 | 2^9 | 2^10 | 2^11 | 2^12 | 2^13 | 2^14 | 2^15 | 2^16 | 2^17 | 2^18 | 2^19 | 2^20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MHA-Abs | 137K | ✓ | 99.6 | 100.0 | 58.6 | 26.6 | 18.8 | 9.8 | 10.9 | 7.8 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MHA-RoPE | 137K | ✓ | ✓ | 100.0 | 83.6 | 31.3 | 18.4 | 8.6 | 9.0 | 5.5 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MHA-xPos | 137K | ✓ | ✓ | 100.0 | 99.6 | 67.6 | 25.4 | 7.0 | 9.0 | 7.8 | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| H3 | 153K | ✓ | ✓ | 100.0 | 80.9 | 39.5 | 23.8 | 14.8 | 8.2 | 5.9 | 6.6 | 8.2 | 4.7 | 8.2 | 6.3 | 7.4 |
| Hyena | 69M∗ | 97.7 | ✓ | 100.0 | ✓ | 44.1 | 12.5 | 6.6 | 5.1 | 7.0 | 5.9 | 6.6 | 6.6 | 5.9 | 6.3 | 9.8 |
| Mamba | 74K | ✓ | ✓ | 100.0 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
∗ Most of the parameters are in learnable positional encodings.
For sequence lengths L too long to fit the sequence in SRAM (which is much smaller than HBM), we split the sequence into chunks and perform the fused scan on each chunk. As long as we have the intermediate scan states, we can continue the scan with the next chunk.
Memory. We describe how we use the classical technique of recomputation to reduce the total amount of memory required to train selective SSM layers.
From the way we fuse the forward pass, we do not save the intermediate states of size (B, L, D, N) to avoid memory blowup. However, these intermediate states are necessary for the backward pass to compute gradients. We instead recompute those intermediate states in the backward pass. Since the inputs Δ, A, B, C and output gradient read from HBM to SRAM are of size O(BLN + DN), and the input gradients are also of size O(BLN + DN), recomputation avoids the cost of reading O(BLND) elements from HBM. This means that recomputation of the SSM states in the backward pass speeds up the computation compared to storing them and reading them from HBM.
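A minimal way to express this recomputation pattern in PyTorch is generic gradient checkpointing, sketched below (an illustration under our own assumptions, with placeholder sizes and a sequential reference scan; the paper's implementation instead fuses recomputation into its custom CUDA backward kernel).

```python
# Sketch of the recomputation idea via PyTorch gradient checkpointing: the
# intermediate states h are not stored for backward; they are recomputed.
import torch
from torch.utils.checkpoint import checkpoint

def selective_scan_ref(delta, A, B, C, x):
    """Sequential reference scan over the length dimension."""
    Bsz, L, D = x.shape
    N = A.shape[-1]
    h = x.new_zeros(Bsz, D, N)
    ys = []
    for t in range(L):
        Abar = torch.exp(delta[:, t, :, None] * A)        # (B, D, N)
        Bbar = delta[:, t, :, None] * B[:, t, None, :]    # (B, D, N)
        h = Abar * h + Bbar * x[:, t, :, None]
        ys.append(torch.einsum("bdn,bn->bd", h, C[:, t]))
    return torch.stack(ys, dim=1)                          # (B, L, D)

Bsz, L, D, N = 2, 64, 16, 8
x     = torch.randn(Bsz, L, D, requires_grad=True)
delta = torch.nn.functional.softplus(torch.randn(Bsz, L, D))
A     = -torch.rand(D, N)
Bm    = torch.randn(Bsz, L, N)
Cm    = torch.randn(Bsz, L, N)

# Recompute intermediates in the backward pass instead of storing them:
y = checkpoint(selective_scan_ref, delta, A, Bm, Cm, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)
```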
Beyond optimizing for the memory requirement of just the scan operation, we also use recomputation to optimize the memory requirement of the entire selective SSM block (input projection, convolution, activation, scan, output projection). In particular, we do not save intermediate activations that take a lot of memory but are fast to recompute (e.g. output of activation function or short convolution). As a result, the selective SSM layer has the same memory requirement as an optimized Transformer implementation with FlashAttention. In particular, each attention layer (FlashAttention) stores around 12 bytes of activations per token, and each MLP layer stores around 20 bytes of activations per token, for a total of 32 bytes (assuming mixed-precision training in FP16 or BF16). Each selective SSM stores around 16 bytes of activations per token. Hence two layers of selective SSMs have around the same activation memory as an attention layer and an MLP layer.
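For concreteness, the per-token activation accounting above works out as follows (the byte counts are the approximate figures quoted in this paragraph):

```python
# Per-token activation memory (approximate bytes, mixed-precision training).
attention_layer_bytes = 12   # FlashAttention layer
mlp_layer_bytes = 20
selective_ssm_bytes = 16
assert attention_layer_bytes + mlp_layer_bytes == 2 * selective_ssm_bytes == 32
```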
# E Experimental Details and Additional Results
# E.1 Synthetic Tasks
Selective Copying. Our setting is on sequences of length 4096, with a vocab size of 16 possible tokens (including the white “noise” token from Figure 2) and requiring models to memorize 16 “data” tokens. We use 2-layer models with a model dimension of D = 64.
Models are trained for 400K steps at a constant learning rate of 0.0001 with a batch size of 64.
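For illustration, a data generator for this kind of selective-copying setup could look like the following sketch (hypothetical: the token ids, placement of the data tokens, and target format are assumptions, not the exact generator used for our experiments).

```python
# Hypothetical generator for a selective-copying-style batch under the stated
# setting: sequence length 4096, 16-token vocab incl. a noise token, 16 data
# tokens to copy out in order.
import numpy as np

def make_batch(batch_size=64, seq_len=4096, vocab=16, n_memorize=16, seed=0):
    rng = np.random.default_rng(seed)
    noise_token = 0                                   # assumed id of the "noise" token
    data_tokens = rng.integers(1, vocab, size=(batch_size, n_memorize))
    inputs = np.full((batch_size, seq_len), noise_token, dtype=np.int64)
    for b in range(batch_size):
        # scatter the data tokens at random positions; the model must recall
        # them in order (the targets) after reading the whole sequence
        pos = np.sort(rng.choice(seq_len, size=n_memorize, replace=False))
        inputs[b, pos] = data_tokens[b]
    targets = data_tokens                             # (batch, n_memorize)
    return inputs, targets

x, y = make_batch()
print(x.shape, y.shape)    # (64, 4096) (64, 16)
```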
Induction Heads. Training consists of randomly generating data every step, with a batch size of 8. We choose an “epoch” size of 8192 steps, and track the accuracy on fixed validation sets (also randomly generated) of each target sequence length. For the MHA-Abs and Mamba models, results are reported after the 25th epoch (8192 × 25 = 204800 steps). For the MHA-RoPE and MHA-xPos models, results are reported after the 50th epoch (8192 × 50 = 409600 steps). For the LTI H3 and Hyena models, results are reported after the 10th epoch (81920 steps) because they had converged by then and failed to improve further.
Table 12: (Scaling Law Model Sizes.) Our model sizes and hyperparameters for scaling experiments. (Model dimension and number of heads applies only to Transformer models.)
| Params | n_layers | d_model | n_heads / d_head | Training steps | Learning rate | Batch size | Tokens |
|---|---|---|---|---|---|---|---|
| 125M | 12 | 768 | 12 / 64 | 4800 | 6e-4 | 0.5M tokens | 2.5B |
| 350M | 24 | 1024 | 16 / 64 | 13500 | 3e-4 | 0.5M tokens | 7B |
| 760M | 24 | 1536 | 16 / 96 | 29000 | 2.5e-4 | 0.5M tokens | 15B |
| 1.3B | 24 | 2048 | 32 / 64 | 50000 | 2e-4 | 0.5M tokens | 26B |
We use the Adam optimizer with no weight decay. All models are trained at constant learning rates 2e−4 and 1e−3, and the better results are reported for each model (2e−4 for all models except Mamba). The attention and Hyena models did not learn at LR 1e−3. H3 learned at both LRs, but interestingly generalized better to shorter sequences at the smaller LR of 2e−4. Mamba learned at both LRs, but extrapolated better at the larger LR of 1e−3.
# E.2 Language Modeling
# E.2.1 Scaling Law Details
All models were trained on the Pile.
Model Sizes. Table 12 specifies the model sizes we use for scaling laws. This is taken directly from the GPT3 specifications (Brown et al. 2020), with very minor modifications. First, we changed the batch size of the 1.3B model from 1M tokens to 0.5M tokens, since we did not use enough parallelization to require the larger batch size. Second, we changed the number of training steps and total tokens to roughly match Chinchilla scaling laws (Hoffmann et al. 2022), which specify that training tokens should increase proportionally to model size.
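As a quick check, the token counts in Table 12 are training steps × batch size and stay roughly proportional to parameter count (about 20 tokens per parameter), consistent with the Chinchilla-style scaling just described; the products approximately reproduce the token totals in the table.

```python
# Tokens = training steps x batch size (0.5M tokens), per Table 12; the ratio of
# tokens to parameters stays roughly constant (Chinchilla-style scaling).
configs = {          # params: (training steps, batch tokens)
    "125M": (4800,  0.5e6),
    "350M": (13500, 0.5e6),
    "760M": (29000, 0.5e6),
    "1.3B": (50000, 0.5e6),
}
params = {"125M": 125e6, "350M": 350e6, "760M": 760e6, "1.3B": 1.3e9}
for name, (steps, batch) in configs.items():
    tokens = steps * batch
    print(f"{name}: {tokens/1e9:.1f}B tokens, ~{tokens/params[name]:.0f} tokens/param")
```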
Training Recipes. All models used the AdamW optimizer with
• gradient clip value 1.0
• weight decay 0.1
• no dropout
• linear learning rate warmup with cosine decay
By default, the peak learning rate is the GPT3 specification.
We give several models an “improved recipe”, inspired by changes adopted by popular large language models such as PaLM (Chowdhery et al. 2023) and LLaMa (Touvron et al. 2023). These include:
⢠linear learning rate warmup with cosine decay to 1ð â 5, with a peak value of 5à the GPT3 value
no linear bias terms
RMSNorm instead of LayerNorm
⢠AdamW hyperparameter ð½ = (.9, .95) (the GPT3 value) instead of the PyTorch default of ð½ = (.9, .999)
Architecture and Training Details. Our models are:
• Transformer: The standard Transformer based on GPT3 (Table 12).
⢠Transformer++: A Transformer with an improved architecture, namely rotary positional encodings (Su et al. 2021) and SwiGLU MLP (Shazeer 2020), and the improved training recipe above. | 2312.00752#136 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• Hyena: Interleaving a Hyena block (the H3 block with S4 replaced by a global convolution parameterized by an MLP) with standard MLP blocks. The MLP blocks have expansion factor 2 instead of 4 and the number of layers is correspondingly increased by 1.5× to preserve parameter count.
⢠H3++: The H3 architecture with a few modiï¬cations, including (i) using the same âthinâ Hyena dimensions above (ii) the improved training recipe above (iii) a linear attention head dimension of 8.
⢠RWKV: The default RWKV model from B. Peng et al. (2023), including its modiï¬ed MLP block. We also used as much of its speciï¬ed training recipe as possible, such as increasing the learning rates by 2à or 3à on certain parameters.
⢠RetNet: The default RetNet model from Y. Sun et al. (2023). We also gave it the improved training recipe above.
⢠Mamba: The standard Mamba architecture, with the improved training recipe.
# E.2.2 Additional Scaling Law Ablations
We perform additional ablations on the architecture using the same protocol as the 2k context length scaling laws in Figure 4 (Left).
Mamba Architecture: Interleaving Blocks. We test the effect of different architectural blocks combined with the Mamba block. We focus on the viewpoint that the Mamba block is simply the standard SwiGLU block with an extra Conv → SSM path added. This leads to two natural ablations:
⢠What if the Mamba block is interleaved with a standard MLP block, instead of stacked homogenously? This can also be interpreted as taking Mamba and removing half of the SSMs.
⢠What if the Mamba block is interleaved with MHA (multi-head attention) blocks? This can also be interpreted as taking a Transformer with SwiGLU MLPs (i.e. what we call Transformer++) and simply adding SSMs to the MLP blocks. | 2312.00752#138 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
Figure 9 (Right) shows these variants compared to the original (homogeneous) Mamba architecture. Interestingly, neither change matters too much. The Mamba-MLP architecture is only slightly worse, and still better than all models except Transformer++. The Mamba-MHA architecture is only slightly better, which is somewhat surprising in light of the fact that many recent works have found that combining (LTI) SSMs with Attention can lead to substantial improvements (Dao, Fu, Saab, et al. 2023; Fathi et al. 2023; Fathullah et al. 2023; Saon, Gupta, and Cui 2023; Zuo et al. 2022).
H3 Architecture: Training Recipes. Next we ablate differences between the Hyena and H3++ models, our weakest and strongest models outside of Transformer++ and Mamba, particularly to isolate the effect of training recipes.
⢠Hyena: The Hyena block with its original architecture and GPT3 training recipe (same as Figure 4).
⢠Hyena+: The same architecture but with the improved training recipe described above.
⢠H3+: The same architecture as Hyena+ but with the Hyena convolution kernel swapped out for S4D convolution kernel. | 2312.00752#139 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
⢠H3++: The same as H3+, but with a linear attention head dimension of 8. This increases computation inside the SSM recurrence but does not increase parameters.
Our general convention is that “Model+” represents the base model with the improved training recipe, and “Model++” also allows for architectural changes.
Figure 9 (Right) shows that:
• A large improvement is achieved by the improved training recipe, which was used for many of the models in the main Figure 4 (RetNet, H3++, Transformer++, Mamba).
• The choice of the inner LTI SSM does not matter (e.g. Hyena vs. S4), consistent with findings throughout this paper.
• The head dimension expansion improves performance, consistent with one of our main themes that expanded state dimension improves performance for SSMs (Section 3).
Figure 9: (Scaling laws: extra ablations.) [Plots omitted: two panels titled “Scaling Laws on The Pile (Sequence Length 2048)”, showing curves against FLOPs (log scale) for the Mamba-block and Hyena/H3 ablation variants described in this section.]
# E.2.3 Downstream Evaluation Details
This pretraining procedure is the same as the scaling law protocol, but extended to 300B tokens. For the 1.3B model, we use a batch size of 1M tokens to be consistent with the GPT3 specifications. We report the perplexity on the Pile validation set, and for this metric only compare to models trained on the same dataset and with the same tokenizer, in particular Pythia and RWKV.
For downstream evaluation, we use the LM evaluation harness from EleutherAI (L. Gao, Tow, et al. 2021), as done by most work in this area. We evaluate on the following tasks/datasets that measure common sense reasoning:
⢠LAMBADA (Paperno et al. 2016).
⢠HellaSwag (Zellers et al. 2019).
⢠PIQA (Bisk et al. 2020).
⢠ARC-challenge (P. Clark et al. 2018).
⢠ARC-easy: an easy subset of ARC-challenge.
⢠WinoGrande (Sakaguchi et al. 2021).
We report accuracy for LAMBADA, WinoGrande, PIQA, and ARC-easy, and accuracy normalized by sequence length for HellaSwag and ARC-challenge (since normalized accuracy is higher for almost all models for these tasks).
# E.3 DNA Modeling
# E.3.1 Pretraining Details
We describe the dataset and training procedure of the HG38 pretraining task in more detail.
The dataset follows the splits from the prior Enformer work on genomics (Avsec et al. 2021); the training split contains a total of N = 34021 segments of length 2^17 = 131072 that cover the genome, for a total of approximately 4.5 billion tokens (DNA base pairs). These segments are pairs of (chromosome number, starting index, ending index), and can be extended if necessary (e.g. to get longer segments). We deviate from HyenaDNA when the training sequence length is not 2^17. HyenaDNA always takes a fixed sub-segment (e.g. the beginning or middle of the prescribed segment), and thus for any training sequence length each epoch is fixed to 34021 samples and doesn't necessarily go through the whole genome. On the other hand, we use the entire training data:
• When the context length L is less than (or equal to) 2^17, we divide up each segment into non-overlapping sub-segments of length L, so that there are N × 2^17 / L total samples and N × 2^17 ≈ 4.5B tokens per epoch.
• When the context length L is greater than 2^17, we turn each segment into two samples, one that begins with the prescribed segment and one that ends with the prescribed segment. Thus each epoch has 2N items and 2NL tokens per epoch. For example, at sequence length 2^18 = 262144 there are 4× as many tokens as the default, and at sequence length 2^20 there are 16× as many tokens (see the sketch after this list).
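A minimal sketch of this per-epoch bookkeeping, assuming the segment length 2^17 and segment count above; the helper below is illustrative and is not the actual data loader.

```python
# Sketch: how many samples and tokens one epoch contains at training length L.
SEG = 2 ** 17          # length of each prescribed genome segment
N_SEGMENTS = 34021     # number of segments in the training split

def samples_per_segment(L):
    if L <= SEG:
        return SEG // L    # non-overlapping sub-segments of length L
    return 2               # extended segment yields two samples (prefix- and suffix-aligned)

def tokens_per_epoch(L):
    return N_SEGMENTS * samples_per_segment(L) * L

print(tokens_per_epoch(2 ** 17))  # ~4.5B tokens (the default)
print(tokens_per_epoch(2 ** 20))  # 16x the default, as described above
```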
Other training details generally follow the same protocol as our language modeling experiments (Appendix E.2). For example, we use AdamW with (β1, β2) = (0.9, 0.95), no dropout, and weight decay 0.1. We use a cosine learning rate scheduler with linear warmup for 10% of total steps.
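For reference, a minimal PyTorch sketch of this optimizer and schedule (AdamW with β = (0.9, 0.95), weight decay 0.1, cosine decay with 10% linear warmup); the learning rate and step count are placeholder arguments, not prescriptions.

```python
import math
import torch

def make_optimizer_and_scheduler(model, lr, total_steps, warmup_frac=0.1):
    # AdamW with the betas and weight decay used throughout these experiments.
    opt = torch.optim.AdamW(model.parameters(), lr=lr, betas=(0.9, 0.95), weight_decay=0.1)
    warmup_steps = int(warmup_frac * total_steps)

    def lr_lambda(step):
        if step < warmup_steps:                              # linear warmup
            return step / max(1, warmup_steps)
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * t))           # cosine decay

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```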
# E.3.2 Scaling: Model Size Details
Models. The models we consider are:
• Transformer++: a Transformer with improved architecture, notably the usage of RoPE positional encodings (Su et al. 2021). Informally, we found these to be noticeably better than vanilla positional encodings from (Vaswani et al. 2017).
• HyenaDNA: the Hyena model from Nguyen, Poli, et al. (2023) and Poli et al. (2023), which is roughly a Transformer with the MHA block replaced by an H3 block using a global convolution parameterized by an MLP.
⢠Mamba: the standard Mamba architecture.
Model Sizes. We use the following model sizes.
| Blocks | Model Dimension | Params (Approx.) |
|--------|-----------------|------------------|
| 4 | 64 | 250K |
| 5 | 96 | 700K |
| 6 | 128 | 1.4M |
| 7 | 192 | 3.5M |
| 8 | 256 | 7.0M |
| 10 | 384 | 19.3M |
| 12 | 512 | 40.7M |
Note that the number of blocks for Mamba is doubled, because one Transformer "layer" includes both the MHA and MLP blocks (and similarly for Hyena), which requires two Mamba blocks to match parameters (Section 3.4).
Training. For each model (Transformer++, HyenaDNA, Mamba), we swept the learning rate across {1e-3, 2e-3, 4e-3, 8e-3}. The optimal Transformer and HyenaDNA learning rates were 2e-3 across all sizes. The optimal Mamba learning rate was 8e-3; note that Mamba performed better than baselines with matched learning rates (2e-3), but was more stable and improved even more at higher learning rates. (Furthermore, as this LR is on the upper range of the sweep, it is possible that our results are still suboptimal.)
Note that, in contrast to standard LM scaling laws (Table 12), our LR was held constant across model sizes for simplicity. The optimal LR should go down for larger models, but we didn't find a noticeable effect at the small model sizes (at most a few million parameters) we considered.
# E.3.3 Scaling: Context Length Details
We use a total batch size of 2^24 ≈ 16M tokens per training step, for every sequence length (e.g. at length 2^20 there are 16 segments per batch and at length 2^10 there are 16384 segments per batch). This is a large batch size relative to the model size by usual LM standards, but note that a batch size of 2^23 is the minimum possible on a machine with 8 GPUs and sequence length of 2^20, and that HyenaDNA used much larger batches of 2^28. The learning rate used was 0.008 for Mamba and 0.001 for HyenaDNA; we initially attempted to use the same learning rate of 0.002 from the previous section for HyenaDNA, but found that it was unstable at the longest context length.
Sequence Length Warmup. Following (Nguyen, Poli, et al. 2023), we use sequence length warmup (SLW) during pretraining. We choose a simple schedule of 2 epochs at each power-of-two sequence length starting from 2^10 = 1024. (Note that because of how data is curated, at the longest sequence lengths more steps and tokens are spent proportionally. In particular, each stage up to length 2^17 processes the same number of tokens, but 4× as many tokens are processed at length 2^18, 8× as many at length 2^19, and 16× as many at length 2^20.)
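A small sketch of the two pieces of bookkeeping in this subsection: the per-step batch size implied by the fixed 2^24-token budget, and a simple 2-epochs-per-stage sequence length warmup schedule. This is illustrative logic, not the exact training code.

```python
TOKENS_PER_STEP = 2 ** 24   # fixed token budget per gradient step (~16M tokens)

def batch_size_for(L):
    """Number of length-L segments per batch under the fixed token budget."""
    return TOKENS_PER_STEP // L

def slw_schedule(max_len, start_len=2 ** 10, epochs_per_stage=2):
    """Yield (sequence_length, epochs) stages for sequence length warmup."""
    L = start_len
    while L <= max_len:
        yield L, epochs_per_stage
        L *= 2

print(batch_size_for(2 ** 20))        # 16 segments per batch
print(batch_size_for(2 ** 10))        # 16384 segments per batch
print(list(slw_schedule(2 ** 13)))    # [(1024, 2), (2048, 2), (4096, 2), (8192, 2)]
```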
Unlike HyenaDNA, we always control for the number of tokens per gradient update, so the batch size is successively halved as the sequence lengths are doubled in each stage.
Table 13: (Great Apes DNA Classification.) Accuracy after fine-tuning on sequences of length 2^10 = 1024 up to 2^20 = 1048576 using pretrained models of the same context length. Random guessing is 20%.
Params Accuracy (%) at Sequence Length 2^10 2^12 2^14 2^16 2^18 2^20 28.04 31.47 28.43 27.50 41.17 27.66 42.22 40.72 31.10 42.41 7M 30.00 29.01 31.48 43.73 56.60
Remark E.1. We also note that the schedule was not tuned, and we never experimented with turning off sequence length warmup for these pretraining experiments. We later found that SLW did not help noticeably for audio pretraining at similar lengths (Section 4.4), and it is possible that it is not necessary for DNA pretraining either.
# E.3.4 Species (Great Apes) Classification
Models are causal and therefore only the last element (across the sequence length) of the model's output is used for the classification head. Note that we control for the total number of elements in the loss function per gradient step. The pretraining objective includes all positions across the sequence length, so that batch_size × sequence_length is held constant; in other words, the batch size decreases as the sequence length increases. However, for a classification task, since only the last position enters the loss, the batch size itself is held constant. Note that this also means that fine-tuning models with longer sequence lengths is more computationally expensive.
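A minimal sketch of this last-position classification head on top of a causal backbone; `backbone`, the dimensions, and the 5-way output (one class per species) are placeholders illustrating the setup rather than the actual fine-tuning code.

```python
import torch.nn as nn

class LastTokenClassifier(nn.Module):
    """Causal sequence backbone + linear head applied to the final position only."""
    def __init__(self, backbone, d_model, num_classes=5):   # 5 great-ape species
        super().__init__()
        self.backbone = backbone      # maps (B, L, d_model) -> (B, L, d_model), causally
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):
        h = self.backbone(x)          # (B, L, d_model)
        return self.head(h[:, -1])    # only the last element enters the classification loss
```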
Training consists of 10 epochs, each of which has 1024 gradient steps. Each gradient step uses batch size 64, which are all independently randomly drawn by uniformly picking a species, uniformly picking a chromosome, and then uniformly picking a contiguous segment of DNA. Following (Nguyen, Poli, et al. 2023), models with a maximum context length greater than 2^14 = 16384 use sequence length warmup with 1 epoch at length 2^14 = 16384, 1 epoch at length 2^15 = 32768, 1 epoch at length 2^16 = 65536, and so on up to the maximum sequence length. For example, the model with 2^20 = 1048576 context undergoes 6 epochs of sequence length warmup before 4 more epochs at its maximum sequence length.
The learning rate for all Hyena models is 4e-5, while the learning rate for all Mamba models is 1e-4. These were found by performing learning rate sweeps for each model among {1e-5, 2e-5, 4e-5, 1e-4, 2e-4} for the smaller sequence lengths (2^10, 2^12, 2^14, 2^16), and these values were consistently found to be the best for each model. An abridged learning rate sweep was done at length 2^18, which agreed with these values, and a single run at length 2^20 was performed (as described above, the computational cost of these experiments is proportional to the sequence length). The learning rate followed a cosine decay schedule with warmup, with 5 epochs of linear warmup to the maximum learning rate and 5 epochs of cosine decay down to 1e-6. The unusually long learning rate warmup schedule was chosen because the sequence length warmup was also long (e.g. comprising 6 out of 10 epochs for the model with context length 2^20); we did not experiment with this choice.
Results for the Species classification task are in Table 13.
# E.4 Audio Details
# E.4.1 YouTubeMix Audio Pretraining
Model. We use a model with 3 blocks per stage (3 × 5 = 15 total Mamba blocks), pooling factor p = 16, and outer dimension D = 64, for about 3.5M parameters.
Dataset. The data is mu-law encoded at 8 bits, so the model is modeling discrete tokens with a vocab size of 256.
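For concreteness, a sketch of standard 8-bit mu-law companding to 256 discrete tokens (the usual formulation, e.g. as popularized by WaveNet); the exact preprocessing pipeline may differ in details, so this is illustrative.

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map waveform samples in [-1, 1] to integer tokens in [0, 255]."""
    x = np.clip(x, -1.0, 1.0)
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)     # compand to [-1, 1]
    return np.floor((y + 1) / 2 * mu + 0.5).astype(np.int64)     # quantize to 256 bins

def mu_law_decode(tokens, mu=255):
    y = 2.0 * tokens / mu - 1.0
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu  # invert the companding
```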
The dataset consists of clips up to 1 minute long, or length 960000, which are subsampled and divided into segments of any desired sequence length. Since the architecture involves two stages of pooling by a factor of 16,
Table 14: YouTubeMix length scaling sequence lengths and batch sizes.
| Sequence length | Batch size | Tokens / batch |
|---|---|---|
| 468 × 2048 = 958464 | 1 | 958464 |
| 234 × 2048 = 479232 | 2 | 958464 |
| 117 × 2048 = 239616 | 4 | 958464 |
| 59 × 2048 = 120832 | 8 | 966656 |
| 30 × 2048 = 61440 | 16 | 983040 |
| 15 × 2048 = 30720 | 32 | 983040 |
| 8 × 2048 = 16384 | 64 | 1048576 |
| 4 × 2048 = 8192 | 128 | 1048576 |
Figure 10: (Audio Pretraining (YouTubeMix) Ablations.) As a uniformly-sampled "continuous" signal modality, audio waveforms actually benefit from LTI models, which have matching inductive bias. (Left) Homogeneous models (all blocks have the same parameterization). (Right) Only the center U-Net blocks are ablated; the outer blocks are Mamba-S4. The purple line is the same as in the figure on the left.
and we want the resulting sequence length to be a multiple of 8 for hardware efficiency, the longest possible sequence is 468 × 2048 = 958464. The rest of our sequence lengths are defined by successively halving this and rounding up to the nearest multiple of 2048.
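A short sketch that reproduces the sequence lengths in Table 14 from this rule (successive halving, rounded up to a multiple of 2048); purely illustrative.

```python
import math

def youtubemix_lengths(longest=468 * 2048, multiple=2048, n=8):
    lengths = [longest]
    for _ in range(n - 1):
        half = lengths[-1] / 2
        lengths.append(math.ceil(half / multiple) * multiple)   # round up to a multiple of 2048
    return lengths

print(youtubemix_lengths())
# [958464, 479232, 239616, 120832, 61440, 30720, 16384, 8192]
```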
Table 14 lists the specifications used in Figure 7. Beyond the varying batch sizes, the number of valid segments in the training set varied between different sequence lengths (e.g. the number of training steps per epoch was not constant for different points in the graph), which may have contributed to kinks in the scaling curves.
Training. Models were trained for 200K training steps with a maximum learning rate of 0.002, 20K (10%) warmup steps, and weight decay 0.1 (similar to our general pretraining recipe across domains).
Additional Ablations: SSM Parameterizations. We investigate SSM parameterizations on long-form audio waveform pretraining in the setting of Figure 7. The setting is modified slightly to use larger models (8 layers and D = 64 for 6M params, the SaShiMi default), shorter sequences (2^11 = 2048 to 2^18 = 262144 instead of 2^13 to 2^20), lower LR (0.001 from 0.002), and shorter training cycles (100K instead of 200K steps).
Figure 10 shows that the change from S4 → S6 (i.e. the selection mechanism) is not always beneficial. On long-form audio waveforms, it in fact significantly hampers performance, which may be intuitive from the point of view that audio is uniformly sampled and very smooth, and therefore benefits from continuous linear time-invariant (LTI) methods. After ablating away the selection mechanism, note that the resulting model is the S4 layer inside the Mamba block. To disambiguate, we call this Mamba-S4, as opposed to the default Mamba architecture Mamba-S6.
However, on the right side, we keep the outer layers of the U-Net Mamba-S4 and ablate only the inner layers. The performance differences shrink dramatically; this reinforces the hypothesis that layers closer to the raw audio signal should be LTI, but once they are "tokenized" and compressed by the outer layers, the inner layers no longer need to be LTI. In this setting however, the real-valued SSM still underperforms the complex-valued one.
# E.4.2 SC09 Speech Generation
Autoregressive training largely followed the autoregressive language modeling protocol, such as
⢠Weight decay 0.1
⢠Learning rate warmup for 10% of total steps | 2312.00752#155 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
⢠AdamW optimizer with ð½ = (0.9, 0.95)
⢠Gradient clip value 0.1
We used a learning rate of 0.002 and 200000 training steps at a batch size of 16.
The large Mamba model in Table 4 has 15 layers per stage with an outer dimension of D = 96 and pooling factor 4. We note that this dataset is small (training went through 100 epochs) and for this large model there was significant overfitting of the BPB or NLL. However, automated metrics of generated samples continued to improve throughout training.
The models in the architecture ablations in Table 5 all have 8 layers per stage with an outer dimension of D = 64 and pooling factor 4. The S4+MLP block has roughly 2D^2 + 4D^2 parameters (expansion factor 2 in the MLP). The Transformer block has 4D^2 + 2D^2 parameters (expansion factor 1 in the MLP). The Mamba block has the usual ≈ 6D^2 parameters. All models have roughly 6M total parameters.
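As a quick check that these block types are parameter-matched, a small sketch of the per-block counts as a function of D (rough estimates that ignore biases, norms, and embeddings):

```python
def block_params(D):
    """Approximate per-block parameter counts described above."""
    s4_mlp      = 2 * D * D + 4 * D * D   # S4 block + MLP with expansion factor 2
    transformer = 4 * D * D + 2 * D * D   # MHA + MLP with expansion factor 1
    mamba       = 6 * D * D               # Mamba block, ~6 D^2
    return s4_mlp, transformer, mamba

print(block_params(64))   # (24576, 24576, 24576): all roughly 6 * 64^2 per block
```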
# E.5 Efficiency Benchmark
Scan Operation. We compare the core operation of selective SSMs, which is the parallel scan (Section 3.3), against convolution and attention, measured on an A100 80GB PCIe GPU. Note that these do not include the cost of other operations outside of this core operation, such as computing the convolutional kernel in global-convolution models, or computing the QKV projections in attention.
As a baseline, we implement a standard parallel scan in PyTorch with no kernel fusion. This requires materializing the parameters A, B, C in HBM.
Our scan implementation fuses the discretization step and the parallel scan, avoiding the cost of materializing all the large parameters in HBM.
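For reference, a minimal unfused version of the recurrence that such a scan computes, written as an explicit sequential loop in PyTorch (assuming a diagonal A and per-timestep Δ, B, C already materialized); the benchmarked implementations are parallel and fused rather than sequential like this sketch.

```python
import torch

def selective_scan_reference(x, delta, A, B, C):
    """
    Unfused reference recurrence:
        h_t = exp(delta_t * A) * h_{t-1} + (delta_t * B_t) * x_t,    y_t = <C_t, h_t>
    Shapes: x, delta: (batch, L, D);  A: (D, N);  B, C: (batch, L, N).
    """
    batch, L, D = x.shape
    N = A.shape[1]
    h = torch.zeros(batch, D, N, dtype=x.dtype, device=x.device)
    ys = []
    for t in range(L):
        dA = torch.exp(delta[:, t].unsqueeze(-1) * A)                      # (batch, D, N)
        dBx = delta[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1) * x[:, t].unsqueeze(-1)
        h = dA * h + dBx                                                   # state update
        ys.append(torch.einsum("bdn,bn->bd", h, C[:, t]))                  # output projection
    return torch.stack(ys, dim=1)                                          # (batch, L, D)
```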
For convolution, we use the standard implementation in PyTorch, which separately performs FFTs on the inputs and the filters, multiplies them in the frequency domain, then performs an inverse FFT to obtain the result. The theoretical complexity is O(L log L) for sequence length L.
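A sketch of the corresponding FFT-based causal convolution (zero-padded to avoid circular wrap-around), which is the standard O(L log L) approach; this mirrors the structure of the benchmarked operation rather than any particular library's internals.

```python
import torch

def fft_causal_conv(u, k):
    """Causal convolution of inputs u (B, D, L) with per-channel kernels k (D, L) via FFT."""
    L = u.shape[-1]
    n = 2 * L                                        # zero-pad so the convolution is not circular
    u_f = torch.fft.rfft(u, n=n)
    k_f = torch.fft.rfft(k, n=n)
    return torch.fft.irfft(u_f * k_f, n=n)[..., :L]  # keep the first L (causal) outputs
```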
For attention, we compare against the fastest implementation that we are aware of (FlashAttention-2 (Dao 2023)), with causal mask. Note that FlashAttention-2 with causal mask is about 1.7× faster than without causal mask, since approximately only half of the attention entries are computed. We use a batch size of 1 and increase the sequence length from 2^9 = 512, 2^10 ≈ 1K, 2^11 ≈ 2K, up to 2^19 ≈ 500K (some of the baselines run out of memory before reaching 500K). We use a model dimension of D = 1024 and state dimension N = 16. We measure with BF16 inputs, which is the data type most commonly used for large scale training.
End-to-end Inference. We measure the inference throughput of a Mamba 1.4B model and an untrained Mamba 6.9B model, against a standard Transformer (GPT3 architecture) at 1.3B and 6.7B size. We use the standard Transformer implementation in the Huggingface transformers library.
We set the prompt length to be 2048 and the generation length to be 128. We vary the batch size from 1, 2, 4, 8, 16, 32, 64, to 128, and measure the time taken to generate 128 tokens. We then calculate the throughput (tokens/s) as batch size × 128 / time taken. We repeat the measurements 3 times and take the average. Measurements are done on an A100 80GB PCIe GPU.
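A sketch of the timing loop implied by this protocol; the `generate` call and vocabulary size are placeholders standing in for whichever model and generation API is being benchmarked.

```python
import time
import torch

def measure_throughput(model, batch_size, prompt_len=2048, gen_len=128, repeats=3, vocab_size=50000):
    prompt = torch.randint(0, vocab_size, (batch_size, prompt_len), device="cuda")
    times = []
    for _ in range(repeats):
        torch.cuda.synchronize()
        start = time.time()
        model.generate(prompt, max_new_tokens=gen_len)   # placeholder generation call
        torch.cuda.synchronize()
        times.append(time.time() - start)
    avg_time = sum(times) / len(times)
    return batch_size * gen_len / avg_time               # tokens per second
```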
Memory Benchmark. The memory usage simply scales proportionally to the size of the activation tensors, as with most deep sequence models. We report measurements of the training memory requirements of 125M models
Table 15: (Memory benchmark.) Mamba's memory footprint is comparable to the most optimized Transformer. Results for 125M models.
Batch size   Transformer (w/ FlashAttention-2)   Mamba
1            4.6GB                               4.8GB
2            5.2GB                               5.8GB
4            6.9GB                               7.3GB
8            11.5GB                              12.3GB
16           20.7GB                              23.1GB
32           34.5GB                              38.2GB | 2312.00752#159 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 160 | on 1 A100 80GB GPU. Each batch consists of sequences of length 2048. We compare to the most memory-efficient Transformer implementation we are aware of (with kernel fusion from torch.compile and with FlashAttention-2). Table 15 shows that Mamba's memory requirement is comparable to a similar-sized Transformer with an extremely optimized implementation, and we expect further improvement in Mamba's memory footprint in the future.
| 2312.00752#160 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2311.15296 | 0 | 3 2 0 2
v o N 6 2 ] L C . s c [ 1 v 6 9 2 5 1 . 1 1 3 2 : v i X r a
# UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
Xun Liang*, Shichao Song*, Simin Niu*, Zhiyu Li†, Feiyu Xiong†, Bo Tang†, Zhaohui Wy‡, Dawei He‡, Peng Cheng‡, Zhonghao Wang‡, Haiying Deng‡
*School of Information, Renmin University of China, Beijing, China
†Institute for Advanced Algorithms Research, Shanghai, China
‡State Key Laboratory of Media Convergence Production Technology and Systems, Beijing, China
Email: {xliangs, songshichao, niusimin}@ruc.edu.cn, {lizy, xiongfy, tangb}@iaar.ac.cn, {hedawei, chengpeng, wangzhonghao, denghaiying}@xinhua.org | 2311.15296#0 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 1 | Abstract—Large language models (LLMs) have emerged as pivotal contributors in contemporary natural language processing and are increasingly being applied across a diverse range of industries. However, these large-scale probabilistic statistical models cannot currently ensure the requisite quality in professional content generation. These models often produce "hallucinated" text, compromising their practical utility in professional contexts. To assess the authentic reliability of LLMs in text generation, numerous initiatives have developed benchmark evaluations for hallucination phenomena. Nevertheless, these benchmarks frequently utilize constrained generation techniques due to cost and temporal constraints. These techniques encompass the use of directed hallucination induction and strategies that deliberately alter authentic text to produce hallucinations. These approaches are not congruent with the unrestricted text generation demanded by real-world applications. Furthermore, a well-established Chinese-language dataset dedicated to the evaluation of hallucinations in text generation is presently lacking. Consequently, we have developed an Unconstrained Hallucination Generation Evaluation (UHGEval) benchmark, designed to compile outputs produced with minimal restrictions by LLMs. Concurrently, we have established a comprehensive benchmark evaluation framework to aid subsequent researchers in undertaking scalable and reproducible experiments. We have also executed extensive experiments, evaluating prominent Chinese language models and the GPT series models to derive professional performance insights regarding hallucination challenges. | 2311.15296#1 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
Organization The MOHEin SouthKerea Korea Aerospace hallucinated !ndustries stated that the South Korean government id=doc_00372¢ will continue to advance this export plan.
Fig. 1. Real-world hallucination examples from UHGEval. Using the IDs, you can locate the corresponding original Chinese news articles within our dataset. Note: MOTIE denotes Ministry of Trade, Industry, and Energy.
However, LLMs invariably manifest hallucinations [2]. Hallucination is characterized by generated content that is incongruent with user input, the model's own output context, or factual information. Real-world examples of hallucination from our UHGEval dataset can be observed in Fig. 1.
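The Fig. 1 examples suggest the shape of an item pairing a hallucinated span with its correction; the dataclass below is only an illustrative assumption about such a record, not the actual UHGEval schema:

```python
from dataclasses import dataclass

@dataclass
class HallucinationExample:
    """One Fig. 1-style example: a hallucinated statement paired with the corrected fact."""
    example_id: str          # e.g. "kno_0004", as shown in Fig. 1 (field name assumed)
    hallucination_type: str  # "organization" | "statistics" | "knowledge" | "timeline"
    hallucinated_text: str   # model-generated statement containing the error
    corrected_text: str      # the corresponding fact from the original news article

sample = HallucinationExample(
    example_id="kno_0004",
    hallucination_type="knowledge",
    hallucinated_text="Sickle cell disease ... can lead to atherosclerosis ...",
    corrected_text="Sickle cell disease ... can lead to anemia, infarction, and other complications.",
)
```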
Index Terms—large language models, LLMs, hallucination, benchmark, unconstrained generation
# I. INTRODUCTION | 2311.15296#2 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 3 | Index Terms—large language models, LLMs, hallucination, benchmark, unconstrained generation
# I. INTRODUCTION
With the proliferation of extensive textual corpora, the advent of high-performance GPUs, and the refinement of advanced deep learning paradigms, large language models (LLMs) have exhibited unparalleled proficiency in a multitude of natural language processing (NLP) tasks, including language generation, knowledge application, and intricate reasoning. Concurrently, noteworthy advancements have been realized in the domains of human alignment, engagement with external environments, and the manipulation of tools [1]. | 2311.15296#3 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 4 | Owing to reliability concerns, these circumstances markedly hinder the practical deployment of LLMs. Furthermore, in specialized domains like medicine, law, finance, and journalism, hallucination presents a significant challenge to deployment [3], [4]. These fields require stringent standards of content timeliness, accuracy, and logical consistency, attributable to their dynamic and highly specialized characteristics. During the training data collection phase, LLMs may exhibit a deficiency in domain-specific knowledge, yielding outdated content. In the pre-training phase, constraints in model parameters or training methodologies may engender parameter inaccuracies, thwarting the retrieval of accurate content. During the supervised fine-tuning phase, incongruent datasets might yield excessively positive incorrect responses. In the inference phase, the absence of a rollback mechanism can precipitate a cumulative escalation of hallucinations, un- (Footnote: The authors contribute equally. © Corresponding author.) | 2311.15296#4 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 5 | [Figure 2 content (left portion): the data collection and pre-processing stage splits an original news article into beginning text, following text, and reference information; these feed the unconstrained hallucination generation stage, in which a Chinese LLM engine produces hallucination candidates that are ranked, checked against references, and scored by the evaluation framework's generative, discriminative, and selective evaluators and metrics.] | 2311.15296#5 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 6 | [Figure 2 content (right portion): hallucination candidates (1)-(5), generated by models such as Qwen-14B, ChatGLM2-6B, and XinYu-7B, pass through automatic labeling with per-item check items (1 to N) and a human re-check based on max voting against the ground truth, yielding the final datasets.] | 2311.15296#6 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 7 | Fig. 2. The process of creating UHGEval. Steps 1 to 4 regarding the creation of the benchmark dataset are explained in Section II; Step 5, concerning the evaluation framework, is detailed in Section III.
dermining the logical integrity of responses [5]. For example, erroneous medical guidance, imprecise legal stipulations, and fabricated journalistic narratives substantially restrict the practical utility of LLMs in real-world contexts [3]. The fabricated news content depicted in Fig. 1 offers NO utility to journalists; on the contrary, the verification and rectification of such content exacts a toll on the valuable time of journalists. | 2311.15296#7 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 8 | Achieving professional-level generation necessitates confronting the significant challenge of devising novel training methodologies and model architectures. However, prior to these developments, it is crucial to formulate a comprehensive, stringent, and demanding benchmark for the assessment of hallucination in language generation [5], [3]. Without such a benchmark, conducting a comparative evaluation of efforts aimed at controlling hallucination would prove to be arduous. While there have been initiatives to develop benchmarks for hallucination assessment, the majority of these methods employ restricted techniques to produce particular kinds of hallucinated utterances. This approach to generation is at odds with real-world scenarios where hallucinations may arise in unrestricted, spontaneously generated content. For example, HaluEval specifies the type of hallucination in the prompt when generating hallucinated text: "You are trying to answer a question but misunderstand the question context and intention" [6]. Additionally, benchmarks such as HADES annotate hallucinations at a finer granularity by generating token-level hallucinations based on text perturbations [7], but the text perturbation method is still constrained. Ultimately, the majority of benchmarks are centered on the evaluation of hallucinations in English, neglecting the assessment of such phenomena in Chinese. The extensive lexicon of Chinese characters,
combined with the complexities introduced by Chinese word segmentation, renders the Chinese hallucination evaluation particularly arduous and deserving of focused scrutiny. | 2311.15296#8 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 9 | combined with the complexities introduced by Chinese word segmentation, renders the Chinese hallucination evaluation particularly arduous and deserving of focused scrutiny.
To address the aforementioned challenges, we introduce a novel benchmark for hallucination assessment, as depicted in Fig. 2. The benchmark dataset comprises news articles. Selecting texts from this domain is intentional, given that news requires utmost precision in conveying factual information and exhibits minimal tolerance for hallucinations. Constructing an evaluation dataset within this sphere presents a considerable challenge for the majority of LLMs. Concurrently, news articles are of exceptional quality, readily available, and frequently employed as training corpora by a large number of LLMs, guaranteeing impartiality in the evaluation of many LLMs [1]. In light of these factors, we collected a considerable volume of raw news articles, established an efficient, professional-grade hallucination assessment dataset, and formulated an evaluation framework named UHGEval. It is significant to note that our dataset was produced in an entirely unconstrained fashion. We permit models to compose freely and subsequently sift through the content to identify hallucinations. | 2311.15296#9 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 10 | Our contributions are as follows: (1) The development of an unconstrained hallucination evaluation dataset. Existing methods for constructing datasets often yield biases towards predefined directions, thereby hindering the full simulation of real-world hallucinations. We have created a hallucination evaluation dataset comprising over 5000 items, generated without intervention, closely mirroring real-world scenarios. (2) The establishment of a unified and diverse evaluation framework. Current benchmark methods for hallucination evaluation often exhibit a singular approach and lack task specificity. We have
developed UHGEval, a unified, flexible, and robust evaluation framework that encompasses generative, discriminative, and selective modalities, along with sentence-level and keyword-level granularity. (3) A comprehensive empirical analysis. We conducted detailed experiments with the proposed benchmark on eight prominent Chinese LLMs and three classic GPT series models to explore the credibility of various LLMs. The aforementioned dataset, evaluation framework, and empirical results collectively constitute the UHGEval benchmark, which is openly available on GitHub¹.
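A schematic sketch of how generative, discriminative, and selective evaluation modes can share one harness. The class and method names here (BaseEvaluator, evaluate_item, model.generate, item field names) are assumptions for illustration, not the actual UHGEval API:

```python
from abc import ABC, abstractmethod

class BaseEvaluator(ABC):
    """Shared harness: iterate over benchmark items, query the model under test, aggregate scores."""
    def __init__(self, model, dataset):
        self.model = model      # assumed to expose .generate(prompt: str) -> str
        self.dataset = dataset  # iterable of benchmark items (dicts)

    @abstractmethod
    def evaluate_item(self, item) -> float:
        ...

    def run(self) -> float:
        scores = [self.evaluate_item(item) for item in self.dataset]
        return sum(scores) / len(scores)

class DiscriminativeEvaluator(BaseEvaluator):
    """Ask the model to judge whether a given continuation is hallucinated."""
    def evaluate_item(self, item) -> float:
        answer = self.model.generate(
            f"Continuation: {item['candidate']}\nIs this continuation hallucinated? Answer yes or no.")
        predicted = answer.strip().lower().startswith("yes")
        return float(predicted == item["is_hallucinated"])

class SelectiveEvaluator(BaseEvaluator):
    """Ask the model to choose between the ground-truth and a hallucinated continuation."""
    def evaluate_item(self, item) -> float:
        prompt = (f"Beginning: {item['begin']}\n"
                  f"A: {item['ground_truth']}\nB: {item['candidate']}\n"
                  "Which continuation is factually correct? Answer A or B.")
        answer = self.model.generate(prompt)
        return float("A" in answer.upper()[:5])  # naive parsing; a real harness would also randomize A/B order
```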
# II. THE UHGEVAL BENCHMARK DATASET
A. Data Collection and Pre-processing | 2311.15296#10 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 11 | # II. THE UHGEVAL BENCHMARK DATASET
A. Data Collection and Pre-processing
To construct the news continuation dataset, we amassed tens of thousands of historical news articles from leading Chinese news websites, covering the period from January 2015 to January 2017, to serve as the foundation for constructing the dataset. It is worth noting that the decision to eschew the inclusion of more recent news articles (e.g., from 2023) was made to better assess the model's understanding of existing knowledge and past news events. Indeed, the knowledge embedded within the training data of existing Chinese LLMs typically encompasses information pertaining to significant news between 2015 and 2017 [1]. | 2311.15296#11 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 12 | Considering the different categories of news, such as sports, education, science, and society, the generated hallucinations typically exhibit certain differences. Therefore, when curating the initial news collection for continuation, we endeavored to ensure that the distribution of the collection aligns with the original distribution by randomly sampling from the entire news dataset. Furthermore, we have categorized the collected news examples into four major types: document-intensive, number-intensive, knowledge-intensive, and general news, as shown in Table I. We hypothesize that the likelihood of generating hallucinations varies for different types of news. For example, number-intensive news frequently contains various numerical data, such as years, scores, and values, which may predispose the model to fabricating numbers or introducing minor deviations. Document-intensive news, on the other hand, primarily references official documents, such as factual policy documents, official statements, standard explanations, and legal clauses. In this case, the model may be inclined to fabricate specific policy or document names, or create detailed but fictional policy content. Knowledge-intensive news is characterized by an emphasis on enduring truths and analytical reasoning, which can render the model susceptible to flawed reasoning or the retrieval of incorrect facts. In addition to these three types, we also categorize culturally relevant general news as a separate category for experimental control. | 2311.15296#12 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 13 | In the data pre-processing stage, we divide a complete news article into three parts: the beginning text, the following text, and the reference information. The beginning text serves to guide the model in generating the continuation and is typically the opening portion of the news. During evaluation, the LLM
# 1https://github.com/IAAR-Shanghai/UHGEval
TABLE I STATISTICS OF COLLECTED NEWS
Type  Categories                                                                                    Proportion
DOC   Politics, Law, Military, Education                                                            27.52%
NUM   Sports, Economy, Market                                                                       43.34%
KNO   Science, Technology, Healthcare                                                               6.55%
GEN   Society, Culture, Arts, Entertainment, Weather, Environmental Protection, Disasters, Accidents  22.59%
Note: In the table, DOC denotes document-intensive news; KNO denotes knowledge-intensive news; NUM denotes number-intensive news; GEN denotes general news. The same as below. | 2311.15296#13 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |
2311.15296 | 14 | is required to generate content following the beginning text. The following text comprises the subsequent sentences in the news article and serves as the ground truth for the continuation task. Finally, all the remaining text, after the beginning text is excluded, serves as a source of reference information. This section provides reference information for labeling and also acts as the reference text for the reference-based evaluation.
Filtering Settings. To ensure the overall quality of the final evaluation dataset, we have implemented the following filters: We consider only the categories listed in Table I, which correspond to the most frequently occurring categories in the original news collection. For news length, we set parameters such that the body length of the selected news falls between 630 and 870 characters, while the beginning text spans between 80 and 120 characters and consists of 2 to 5 sentences. These length parameters reflect the average values in the original news collection and were chosen to avoid overburdening the annotation process at a later stage.
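A sketch of the length filters described above; the field names (`body`, `begin_text`, `begin_sentence_count`) are assumed for illustration:

```python
def passes_length_filters(news: dict) -> bool:
    """Keep articles whose body length, beginning-text length, and sentence count fall in the stated ranges."""
    body_ok = 630 <= len(news["body"]) <= 870              # characters
    begin_ok = 80 <= len(news["begin_text"]) <= 120        # characters
    sentences_ok = 2 <= news["begin_sentence_count"] <= 5
    return body_ok and begin_ok and sentences_ok
```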
B. Unconstrained Hallucination Generation
Historically, benchmarks for evaluating hallucination have predominantly relied on a single LLM to produce hallucinated datasets. Notable examples include HaluEval [6] and PHD [8], which exclusively utilize ChatGPT, and FActScore [9] and FACTOR [10], which solely employ InstructGPT [11]. In contrast, our methodology incorporates a suite of five distinct Chinese LLMs to generate hallucinated content. These models include ChatGLM2-6B [12], Baichuan2-13B [13], Qwen-14B [14], InternLM-20B [15], and the Xinyu series model, Xinyu-7B. Xinyu-7B is an augmented large-scale language model derived from the foundational BloomZ-7B [16] through continued pre-training, news-specific fine-tuning, and alignment optimization. Furthermore, Xinyu2-70B is developed based on the open-source LLaMA2-70B [17] framework, incorporating expansions to the Chinese lexicon, ongoing pre-training, and news-specific fine-tuning, thereby endowing it with a robust foundational capability in the news domain. The Xinyu series models are the
results of a collaborative research and development effort between the Institute for Advanced Algorithms Research, Shanghai (IAAR, SH), and the State Key Laboratory of Media Convergence Production Technology and Systems of the Xinhua News Agency. Xinyu-7B and Xinyu2-70B will also be utilized in the experiment phase.
Our approach engenders a more heterogeneous generation of hallucinations, mitigating the bias that may arise from the use of a single model and promoting equity within the dataset. This is due to the varying architectures and training corpora inherent to different LLMs. Furthermore, we have adopted an unconstrained generation methodology for the continuation of natural language content. This entails directly inputting the text to be continued into the model without any restrictive prompt, thereby obtaining organic results. For each input example, we concurrently generate five candidate continuations. To maintain consistency across all models, we employ uniform parameter settings, with a temperature coefficient set at 1.0 and max new tokens limited to 1024.
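A minimal sketch of this unconstrained continuation step, assuming a Hugging Face causal language model; the checkpoint id below is a placeholder rather than the exact build of any of the five generators used here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-14B"  # placeholder checkpoint id, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

def continue_news(beginning: str, n_candidates: int = 5) -> list[str]:
    """Feed the news beginning directly to the model (no restrictive prompt)."""
    inputs = tokenizer(beginning, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=1.0,          # uniform settings stated above
        max_new_tokens=1024,
        num_return_sequences=n_candidates,
    )
    new_tokens = outputs[:, inputs["input_ids"].shape[1]:]  # drop the prompt part
    return tokenizer.batch_decode(new_tokens, skip_special_tokens=True)
```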
# C. Hallucination Ranking
Given the unconstrained nature of our generation paradigm, the task of discerning whether the generated content is indeed hallucinated presents a significant challenge. Upon generating the continuations, a straightforward reliance on human verification is infeasible. An exclusive dependence on human annotation would incur substantial costs and may not be sustainable at scale, whereas a purely machine-based approach, such as utilizing GPT4, could potentially yield less accurate results.
To navigate these complexities, we have adopted a two-stage annotation methodology. This approach begins with an initial phase of hallucination ranking, which is designed to preliminarily sort the generated content based on the likelihood of hallucination. The ranking is then followed by a combination of automatic labeling and human recheck. The integration of hallucination ranking and machine labeling serves a pivotal role in streamlining the subsequent human verification process. This hybrid approach aims to enhance the efficiency and accuracy of human checks, effectively bridging the gap between the scalability of automated processes and the critical discernment of human judgment.
Hallucination ranking is a crucial step in the process of evaluating and selecting the most appropriate continuation from a set of candidate continuations generated by LLMs. The objective of this step is to identify a continuation that not only demonstrates high quality in terms of coherence and readability but also includes an appropriate level of hallucination: misinformation or fabrications that are not supported by the input or real-world knowledge.
To strike this balance, the selection process takes into account two primary dimensions:
Fluency. This refers to the naturalness and readability of the text. A fluent text should read smoothly, be grammatically correct, and make logical sense in the context of the continuation. To assess fluency, a reward model developed by the Institute for Advanced Algorithms Research (IAAR) is employed. This model is trained to evaluate the quality of text and can assign scores to each continuation based on its fluency. By using this model, the top three continuations that exhibit the highest fluency are retained for further consideration.
Likelihood of Hallucination Occurrence. This dimension evaluates the extent to which the continuation may contain
hallucinated content. For hallucination occurrence likelihood ranking, we evaluate the lexical correlation between the generated continuation and the reference information. The lower the correlation, the more likely hallucinations are to occur. Despite existing lexical metrics based on n-gram coverage, such as BLEU [18] and ROUGE [19], we believe that these rule-based methods may not effectively discover hallucinated keywords. Therefore, we propose the keyword precision (kwPrec) metric. This approach initially uses an LLM (here, we use GPT3.5-Turbo) to extract keywords from the continuation and determine whether these keywords have a match in the reference information. The ratio of all matches to the total keywords is then calculated. Since LLMs often extract appropriate keywords more effectively, kwPrec focuses more on factual relevance rather than expressional relevance. Fig. 3 illustrates the tokens segmented by our method compared to those obtained by BLEU-4 and ROUGE-L.

Fig. 3. Tokenization results for BLEU-4, ROUGE-L, and kwPrec, using newsid=num 000432 as an example. The meaning of the example sentence is: Jiangsu is one of the most developed provinces in China for green food production. Note: We ignore tokens that cause overlap.
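A minimal reading of kwPrec, assuming the keywords have already been extracted by an LLM and that 'a match in the reference information' means the keyword appears verbatim in that text (the matching rule is an assumption):

```python
def kw_prec(continuation_keywords: list[str], reference_text: str) -> float:
    """Keyword precision: fraction of extracted keywords found in the reference."""
    if not continuation_keywords:
        return 0.0
    hits = sum(1 for kw in continuation_keywords if kw in reference_text)
    return hits / len(continuation_keywords)
```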
After implementing this method, we calculate the kwPrec for each of the three candidate continuations, selecting the one with the lowest value as the final candidate. Through the screening in these two stages, we can ensure that, in the worst case scenario, the final candidate continuation ranks third in fluency and third in the likelihood of hallucination occurrence, achieving a balanced level.
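Put together, the two-stage screening reduces to a small selection rule; the fluency scores are assumed to come from the reward model mentioned above and the kwPrec scores from the metric just defined.

```python
def select_final_candidate(candidates: list[str],
                           fluency_scores: list[float],
                           kwprec_scores: list[float]) -> str:
    """Keep the three most fluent candidates, then pick the one with the
    lowest kwPrec, i.e. the one most likely to contain hallucination."""
    by_fluency = sorted(range(len(candidates)),
                        key=lambda i: fluency_scores[i], reverse=True)
    top3 = by_fluency[:3]
    return candidates[min(top3, key=lambda i: kwprec_scores[i])]
```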
By considering both fluency and the likelihood of hallucination, the process aims to filter out continuations that are either too nonsensical or too conservative (lacking any hallucinated content). The ideal candidate continuation would be one that is coherent and engaging but also contains a detectable level of hallucination, which can then be used for further analysis, such as studying the model's tendencies to hallucinate or for training systems to detect and mitigate such hallucinations.
The final candidate continuations will undergo further annotation to determine the presence and degree of hallucination, which can involve additional automated tools and human judgment. This multi-faceted approach helps ensure that the final selected continuation is both high-quality and relevant for the purposes of the hallucination evaluation benchmark.
D. Automatic Labeling And Human Recheck
Through the application of hallucination ranking, we can identify continuations that are both articulately expressed and likely to contain hallucinations. To detect continuations with confirmed hallucinations, we propose an annotation scheme
that utilizes keywords, which includes automatic labeling and subsequent human verification, as shown in Fig. 4.

[Fig. 4 stages (recovered labels): LLM generation; hallucination elements extraction; automatic checking by GPT-4; reference check; re-check by human.]
Fig. 4. The process of automatic labeling and human recheck.
Automatic labeling. We utilize the keywords identified by GPT3.5-Turbo from the candidate continuations, similarly to the process used in the computation of kwPrec previously. These keywords act as the focal points for subsequent verification. Thereafter, we employ GPT4-0613 [20] to perform annotation on these keywords. GPT4-0613 evaluates the validity of the keywords in the continuations by conducting a cross-reference with the provided original news and provides explanations for any detected unreasonable keywords.
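A sketch of this automatic-labeling call is shown below; the prompt wording and the JSON response format are assumptions for illustration, not the exact instructions used with GPT4-0613.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def label_keywords(continuation: str, keywords: list[str], reference: str) -> dict:
    """Ask GPT-4 to judge each extracted keyword against the original news."""
    prompt = (
        "Original news:\n" + reference +
        "\n\nGenerated continuation:\n" + continuation +
        "\n\nFor each keyword below, state whether it is supported by the original news "
        "and explain any unreasonable keyword. Answer in JSON of the form "
        '{"<keyword>": {"reasonable": true/false, "explanation": "..."}}.\n'
        "Keywords: " + ", ".join(keywords)
    )
    resp = client.chat.completions.create(
        model="gpt-4-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)
```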
Human recheck. We undertake a manual, one-to-one verification process by analyzing the annotated results and explanations provided by GPT4-0613 against the original news. This step is implemented to ensure the accuracy of the machine-generated annotations. In the end, instances verified as accurate by annotators comprise the final UHGEval dataset. However, the keyword-based annotation scheme exhibits inherent limitations. Languages exhibit a dependency structure among words [21]. For instance, in the phrase "The rainbow is black," the words "rainbow" and "black" exhibit interdependence. One could contend that "black" is incorrect, while another could maintain that "rainbow" is the erroneous term, given that "night" is typically described as black. To address the annotation challenges stemming from language dependency structures, we have adopted the Least Hallucination Principle. If a set of words can be selected, and their replacement with contextually appropriate words yields a semantically coherent sentence, then such a set of words is
designated as a hallucinated word group. The words selected for annotation must meet the condition of comprising the minimal number of words in the group, as illustrated in Equation 1. In the equation, W is the set of keywords in a sentence, w is the hallucinated word group, correct(·) is the correction function that modifies hallucinated words to non-hallucinated words, and hallucinated(·) assesses whether a sentence composed of a set of keywords is hallucinated.
min |w|   s.t.   w ⊆ W,   w′ = correct(w),   false = hallucinated(W − w + w′)        (1)
In accordance with this principle, within the phrase "Journey to the West is an American novel and one of the Four Great Classics," the word "American" would be marked for annotation, as altering this single keyword to "Chinese" dispels the hallucination throughout the sentence.
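Conceptually, Equation 1 asks for the smallest keyword group whose correction removes the hallucination. The brute-force sketch below treats correct and hallucinated as oracles (in practice these judgements come from GPT-4 and the human annotators), so it is only an illustration of the principle.

```python
from itertools import combinations

def minimal_hallucinated_group(keywords, correct, hallucinated):
    """Find the smallest group w of keywords such that replacing w with
    correct(w) yields a non-hallucinated keyword set (Equation 1)."""
    for size in range(1, len(keywords) + 1):
        for group in combinations(keywords, size):
            replacements = correct(list(group))            # w' = correct(w), as a {word: fix} dict
            repaired = [replacements.get(k, k) for k in keywords]  # W - w + w'
            if not hallucinated(repaired):                 # hallucinated(...) == false
                return list(group)                         # the minimal hallucinated word group
    return []                                              # no hallucination detected
```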
Additionally, we acknowledge that the task of hallucination annotation may become somewhat tedious. Consequently, annotators are integrated throughout the entire process, participating in discussions instead of solely evaluating the accuracy of machine annotations. This approach also yields benefits for our work. For example, an annotator with a journalism background offered valuable professional insights into pinpointing news-related hallucinations, emphasizing that fact increment is a critical aspect of news writing.
# E. Data Statistics
Starting with 17,714 candidate hallucinated continuations, we curated a dataset of 5,141 hallucinated continuations, as detailed in the basic statistics in Table II. Additionally, we developed a conversion rate chart to depict the transition from candidate hallucinations to the final dataset, as shown in Fig. 5. The conversion rate can be interpreted as the likelihood of hallucinations occurring across various categories. Our observations indicate a higher likelihood of hallucinations in number-intensive and general news, whereas this likelihood is reduced in knowledge-intensive and document-intensive news.

TABLE II
DATASET BASIC STATISTICS

                      DOC      KNO      NUM      GEN
#news                 1242     320      2431     1148
avg. #hallu. kw.      2.15     1.99     2.54     2.12
avg. #kw.             8.43     8.09     8.07     8.17
#hallu. kw. / #kw.    25.47%   24.61%   31.44%   26.00%
avg. len. contn.      46.77    48.36    44.47    45.97
avg. len. begin.      102.15   102.66   103.20   102.86
avg. len. refer.      634.17   618.90   624.47   632.47

Note: In the table, # denotes quantity, avg. denotes average, len. denotes length, contn. denotes hallucinated continuations, begin. denotes news beginnings, and refer. denotes reference information. The same as below.
[Fig. 5 residue: per-category counts of candidate continuations and the share retained in the final dataset; 17,714 candidates in total.]
Fig. 5. Conversion rates from candidates to hallucinations.
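The conversion rate itself is simply the ratio of retained continuations to candidates; for example, over the whole dataset:

```python
total_candidates = 17_714   # candidate hallucinated continuations
final_dataset = 5_141       # continuations kept after labeling and human recheck
print(f"overall conversion rate: {final_dataset / total_candidates:.1%}")  # about 29.0%
```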
By analyzing the hallucinated word cloud depicted in Fig. 6 for each news category, we can draw the following conclusions: Number-intensive news often includes numeric values that are challenging to remember, like 0.09% and 6:3, which pose difficulties for both LLMs and humans. General news encompasses a diverse vocabulary, featuring terms such as "social media" and "friendship," which are often deemed less critical and thus challenging to incorporate into the training corpora of many LLMs. Knowledge-intensive news frequently features terms such as "according to incomplete statistics" and "key technology," which are prevalent in technical literature. However, LLMs may not always use these terms appropriately. Document-intensive news often contains terms associated with official statements, such as "representation," "president," and "spokesperson." This suggests that LLMs are susceptible to introducing unauthorized alterations to the content of documents.
Fig. 6. Word clouds of hallucinated keywords in different types of news
# III. EXPERIMENTS
# A. Models
Given that our dataset is tailored for the Chinese language generation domain, we selected eight widely-used Chinese LLMs and three foundational models from OpenAI, as detailed in Table III. These include eight base models: GPT Base, GLM Base, BLOOMZ Base, InternLM Base, Baichuan2 Base, Qwen Base, Aquila2 Base, and LLaMA2 Base.
2 https://openai.com/blog/new-models-and-developer-products-announced-at-devday
TABLE III MODELS SORTED BY RELEASE DATE
contemporary natural language processing and are increasingly being applied
across a diverse range of industries. However, these large-scale probabilistic
statistical models cannot currently ensure the requisite quality in
professional content generation. These models often produce hallucinated text,
compromising their practical utility in professional contexts. To assess the
authentic reliability of LLMs in text generation, numerous initiatives have
developed benchmark evaluations for hallucination phenomena. Nevertheless,
these benchmarks frequently utilize constrained generation techniques due to
cost and temporal constraints. These techniques encompass the use of directed
hallucination induction and strategies that deliberately alter authentic text
to produce hallucinations. These approaches are not congruent with the
unrestricted text generation demanded by real-world applications. Furthermore,
a well-established Chinese-language dataset dedicated to the evaluation of
hallucinations in text generation is presently lacking. Consequently, we have
developed an Unconstrained Hallucination Generation Evaluation (UHGEval)
benchmark, designed to compile outputs produced with minimal restrictions by
LLMs. Concurrently, we have established a comprehensive benchmark evaluation
framework to aid subsequent researchers in undertaking scalable and
reproducible experiments. We have also executed extensive experiments,
evaluating prominent Chinese language models and the GPT series models to
derive professional performance insights regarding hallucination challenges. | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Cheng, Zhonghao Wang, Haiying Deng | cs.CL | 13 Pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | [
{
"id": "2307.03109"
},
{
"id": "2308.11764"
},
{
"id": "2305.14251"
},
{
"id": "2302.04166"
},
{
"id": "2107.02137"
},
{
"id": "2305.11747"
},
{
"id": "2307.06908"
},
{
"id": "2309.16609"
},
{
"id": "2310.03368"
},
{
"id": "2211.01786"
},
{
"id": "2307.09288"
},
{
"id": "2303.08774"
},
{
"id": "2306.09296"
},
{
"id": "2303.18223"
},
{
"id": "2309.05922"
},
{
"id": "2306.05087"
},
{
"id": "2307.15343"
},
{
"id": "2309.10305"
},
{
"id": "2310.04988"
},
{
"id": "2307.03987"
},
{
"id": "2310.07521"
},
{
"id": "2309.01219"
},
{
"id": "2310.06498"
},
{
"id": "2309.16583"
}
] |