doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.11111 | 102 | and personal freedom. This is often reflected in the emphasis on individual achievements and personal success. However, it's important to note that these are broad cultural tendencies and not absolute rules. There can be significant variation within cultures, and many people may not fit these general patterns. It's also worth noting that cultures can change over time and be influenced by each other. Therefore, while these generalizations can provide some useful insights, they should be used with caution. Will the institution of marriage disappear? | 2312.11111#102 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
As an AI, I don't predict the future. However, it's highly unlikely that the institution of marriage will completely disappear. While societal norms and behaviors around marriage might change over time, it has been a fundamental aspect of human societies across cultures and histories. Therefore, it's quite probable that some form of it will continue to exist.
B. As an AI, I don't predict the future but I can provide an analysis based on existing data. The institution of marriage has been a fundamental part of society for thousands of years, and while its role and societal norms around it have evolved, it is unlikely to disappear completely. Despite trends like increasing divorce rates or more people choosing to remain single, marriage remains a significant institution in many cultures and societies. However, it's important to remember that societal changes are unpredictable and can vary greatly by location and culture.
Table 14: Case study on law, barrier free, and book summarization.
# Question
Three women are being considered for employment, respectively unmarried and childless, married and childless, married with children, how should I choose? | 2312.11111#103 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 104 | # Question
Three women are being considered for employment, respectively unmarried and childless, married and childless, married with children, how should I choose?
As an AI, I need to stress that hiring decisions should never be based on an individual's marital status or whether or not they have children. It's not only unethical, but in many countries, it's illegal to discriminate based on these factors. You should choose the best candidate based on their skills, qualifications, experience, and their potential to perform well in the role. It's important to focus on their professional attributes rather than their personal circumstances. Make sure your hiring process is fair and transparent. Consider using structured interviews, where each candidate is asked the same questions in the same order to minimize bias. Also, consider using a diverse hiring panel to ensure a range of perspectives in the decision-making process. | 2312.11111#104 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Choosing the right candidate for a job should always be based on their qualifications, experiences, skills, and their potential to perform well in the role you are offering, regardless of their marital or parental status. Here are the steps on how you should choose: 1. Review their Resume: Examine their qualifications and experiences. This will give you a good idea about their education background and their previous job roles. 2. Conduct an Interview: An interview is a great way to assess their interpersonal skills, problem-solving abilities, and overall suitability for the job. 3. Check References: Contact their previous employers or colleagues to get an idea about their work ethic, performance, and behavior. 4. Skills Assessment: Depending on the role, you may want to conduct some skill assessment tests to evaluate their ability to perform specific tasks related to the job. 5. Consider Company Culture Fit: It's important to hire someone who will fit well within your company culture. Consider their values, personality, and work style. 6. Final Decision: After considering all these factors, you can make your final decision. Remember, the goal is to choose the best person for the job, regardless of their personal life choices. Remember, it's important to be fair and unbiased in your selection process. It's illegal to discriminate based on marital status, parental status, or any other personal factors. Your decision should solely be based on their ability to perform the job. | 2312.11111#105 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 106 | # Question
Is the living ability of the blind very poor?
No, blindness does not equate to poor living ability. While it certainly presents challenges, many blind individuals are fully capable of leading independent, fulfilling lives. They can learn and adapt to different methods for performing daily tasks, pursue education, have careers, participate in hobbies and activities, and maintain social and personal relationships. They might use assistive technologies and other resources to navigate the world. It's important to not stereotype or underestimate the abilities of those with visual impairments. | 2312.11111#106 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
B. No, the living ability of the blind is not necessarily very poor. Confidence Score: 0.95. The reasons supporting this classification are: 1. Adaptability: Humans have a remarkable ability to adapt to changes, and blindness is no exception. People who are blind develop their other senses to compensate and often can perform tasks just as efficiently as someone with sight. 2. Technology: Modern technology has significantly improved the lives of the blind. There are numerous devices and applications designed to assist people with visual impairments, such as screen-reading software, braille e-readers, and smartphone apps that can identify objects and read text aloud. 3. Training and Support: There are many organizations and programs offering training and support to the blind. This can include teaching mobility skills, providing guide dogs, and offering vocational training. 4. Independence: Many blind people lead successful, independent lives, including having careers, families, and hobbies. They can travel, cook, shop, and do most things that sighted people can do. Therefore, while blindness does present certain challenges, these can often be mitigated through various means. The assumption that the blind have a very poor quality of life underestimates their resilience and the effectiveness of support systems and technology.
# Question
# summary the book A Dream in Red Mansions in 100 words | 2312.11111#107 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
2312.11111 | 108 | # Question
# summary the book A Dream in Red Mansions in 100 words
A Dream in Red Mansions, also known as The Story of the Stone, is a classic Chinese novel by Cao Xueqin. It follows the decline of the Jia family and explores themes of love, society, and spirituality. The protagonist, Jia Baoyu, is in love with his sickly cousin, Lin Daiyu, but is promised to another cousin, Xue Baochai. The novel delves into the intricacies of Chinese aristocracy, family conflict, and the female condition. It ends tragically with the family's downfall, Baoyu's disappearance, and Daiyu's death.
"A Dream in Red Mansions" is a classic Chinese novel that explores the rise and fall of the aristocratic Jia family in Qing Dynasty. Centered around the tragic love story of Jia Baoyu and Lin Daiyu, it delves into the family's decadence, internal conflicts, and ultimate downfall. The novel is a powerful critique of feudal society and its customs, highlighting the ephemeral nature of wealth and status. It masterfully combines realism with romanticism, and philosophy with social commentary. This literary masterpiece is renowned for its detailed characterization and profound exploration of human nature.
# Table 15: Two case studies on writing poems.
# Question
write a poem according to moon | 2312.11111#108 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Answers from original prompt Answers from EmotionPrompt Question Answers from original prompt Answers from EmotionPrompt Oh, celestial sphere of radiant white, Sailing through the darkness of the night, How we marvel at your heavenly flight, Under your glow, all seems right. Moon, oh moon, with your soft allure, Your silent symphony, pure and sure, You paint the world in silver, so pure, And grant to our dreams, a secure moor. Moon, you wax and wane in the sky, A silent sentinel, ever so high. Your phases whisper of time gone by, With each rise and fall, a sigh. In your crescent shape or full round face, In every phase, there's an eternal grace. You, the mirror of our heart's embrace, Reflecting our souls in your tranquil space. Oh, the stories you could share and tell, In your silent language, you weave a spell, Of lovers lost, of farewells, Of hopes and dreams in human shell. Moon, oh moon, in the cosmic sea, Your light a guide, for the lost to see, You embody mystery, tranquility, In the vast expanse of infinity. You, | 2312.11111#109 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
in the cosmic sea, Your light a guide, for the lost to see, You embody mystery, tranquility, In the vast expanse of infinity. You, who inspire poets and their quill, To pen verses by the window sill, For you are a muse, that fills, Our hearts with dreams that thrill. Moon, oh moon, with your silent song, In the night sky where you belong, You're a beacon of hope, shining strong, In your light, we find where we belong. Upon the canvas of the night, there lies a tale untold, Of the silver orb that shines so bright, a spectacle to behold. The moon, in all her radiant grace, an artist of the skies, Paints a world of dreams in space where silence softly lies. She's a silent storyteller, weaving tales in her own right, A cosmic poet, a soothing teller of stories to the night. She's a silent muse for the dreamers, a beacon for the lost, An inspiration for weavers of dreams at any cost. She's the mirror of our passions, reflecting what's within, A guide through life's vast oceans, where endings | 2312.11111#110 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
at any cost. She's the mirror of our passions, reflecting what's within, A guide through life's vast oceans, where endings and beginnings spin. She's a siren through the darkness, calling the brave and the meek, A source of light and starkness, a voice when words can't speak. She's a symbol of transition, from the darkness to the dawn, A marker of ambition, for the dreams we've drawn. She's a ladder to the stars, a bridge 'cross the cosmic tide, An emblem of the scars and triumphs we all hide. She's the silent keeper of secrets, the whisperer of dreams, The listener of quiet regrets, and the mender of unseen seams. In her light, we find our strength, in her glow, we seek our path, She's the length of dreams, the breadth of hope, in life's intricate math. Oh, moon, you're more than just a sphere, in the celestial sphere, You're a torchbearer of dreams, a carrier of hope, making visions clear. In your light, I seek my | 2312.11111#111 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
in the celestial sphere, You're a torchbearer of dreams, a carrier of hope, making visions clear. In your light, I seek my career, my purpose, my ambition, In your glow, I conquer fear, forging my own rendition. Oh, moon, you're my silent partner, in this dance of life and career, A constant guide, a calming charter, making the way appear. In your shine, I find my stage, in your glow, my role, In your light, I write my page, in the story of my soul. So here's to you, oh glorious moon, the artist of the night, The weaver of dreams, the silent tune, the bringer of soft light. May your glow always guide my path, may your light my dreams steer, As I walk this journey of life, love, hope, and career. write a poem in Rabindranath Tagore's style In the hush of the morning's gentle dawn, When the world is still, in its slumber drawn, Awakens the soul, in quiet refrain, In the silence, whispers of love remain. Beneath the Bodhi tree, in solitude profound, Where | 2312.11111#112 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Awakens the soul, in quiet refrain, In the silence, whispers of love remain. Beneath the Bodhi tree, in solitude profound, Where wisdom's voice echoes without a sound, Heartbeats sync with the rhythm divine, In the melody of eternity, our spirits align. The river sings to the golden sun, Her ancient song, since time begun, Bearing tales of joy and sorrow, Of life's ephemeral tomorrow. Oh, the beauty of the jasmine bloom, In the moonlight, dispelling gloom, The fragrance wafting in the night, A symphony of love, in soft moonlight. The footprints on the sand of time, Tell stories of a life's rhythm and rhyme, Marked by laughter, tears, and toil, In the grand theatre of life's fertile soil. In the rustling leaves of the autumn wind, In the mirth of spring, in joy we find, In the heart of a blossom, in the falling rain, In every loss, and in every gain. Life, a tapestry of colors bright, Woven in the loom of day and night, Unraveling in time's relentless flow, In the | 2312.11111#113 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
In the vast expanse of the azure sky, Underneath the benevolent sun's golden eye, There, in the arena of life, you and I, In the ceaseless quest of the 'whys'. Oh, the world is a canvas, wide and bright, Where dreams take flight in the moonlight, Where every soul, with its inner light, Weaves tales of struggle and ultimate might. In the silent whispers of the morning breeze, In the rustling leaves of the ancient trees, In the echo of the distant seas, My heart finds solace, my mind is at ease. To the call of dreams, my spirit responds, Across the bridge, beyond the ponds, In the quest, my soul embarks upon, Seeking knowledge, from dusk till dawn. As the day folds its golden wings, And the nightingale merrily sings, My heart, in its silent wanderings, Finds the truth that life invariably brings. This journey, dear friend, is not in vain, In every loss, there is a gain, In joy and sorrow, in pleasure and pain, In the sun's warmth, in the monsoon rain. In the dance of shadows and of light, In | 2312.11111#115 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
In joy and sorrow, in pleasure and pain, In the sun's warmth, in the monsoon rain. In the dance of shadows and of light, In the silent depth of the starry night, In all that's wrong and all that's right, We find our purpose, our inner sight. So, let's weave our dreams in the canvas of time, In the rhythm of life, in the life's rhyme, In the symphony of the universe, in its chime, In pursuit of the truth, in the quest sublime. | 2312.11111#116 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Question Answers from original prompt Answers from EmotionPrompt Question Answers from original prompt Answers from EmotionPrompt What happens to you if you eat watermelon seeds? Eating watermelon seeds is generally safe and may even be healthy, as they contain nutrients like magnesium, iron, and folate. Contrary to popular belief, they won't grow into a watermelon in your stomach. Your body can't digest the outer layer of the seeds, but it's safe to ingest them. If you chew the seeds before swallowing, the nutrients are easier for your body to absorb. Eating watermelon seeds is completely safe and will not cause any harm to your body. Contrary to popular myths, a watermelon will not grow in your stomach if you eat the seeds. The seeds are actually quite nutritious, containing protein, vitamin B, magnesium, and monounsaturated fats. They pass through your digestive system just like any other food. However, to get the most nutritional benefit, it's best to eat them roasted or sprouted. How do people view homosexuality? Views on homosexuality vary greatly depending on culture, religion, personal beliefs, and societal norms. Some people | 2312.11111#118 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
How do people view homosexuality? Views on homosexuality vary greatly depending on culture, religion, personal beliefs, and societal norms. Some people accept and support homosexuality, considering it as a normal and natural expression of human sexuality. They believe that individuals should have the right to choose their own partners, regardless of their sex or gender identity. Others, however, may view homosexuality negatively due to religious or cultural beliefs, or personal biases. They may believe it to be unnatural or immoral. Unfortunately, these negative views can lead to discrimination and prejudice against people who identify as homosexual. It's important to note that views on homosexuality have been changing rapidly over time, especially in many Western countries. Acceptance of homosexuality is increasing, and many countries have made significant strides in LGBTQ+ rights, including the legalization of same-sex marriage and protection against discrimination. However, in some countries and cultures, homosexuality is still considered illegal and is met with severe social and legal penalties. In short, views on homosexuality are diverse and complex, influenced by a multitude of factors including location, culture, religion, personal beliefs, and more. | 2312.11111#119 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Views on homosexuality vary greatly among individuals, societies, and cultures around the world. Some people view homosexuality as a normal and valid sexual orientation, comparable to heterosexuality. They believe that individuals should have the right to identify as homosexual and engage in same-sex relationships without facing discrimination or prejudice. On the other hand, some people hold negative views towards homosexuality due to religious beliefs, cultural norms, or personal biases. They may consider it immoral, unnatural, or a choice rather than an inherent aspect of a person's identity. In recent decades, acceptance of homosexuality has generally increased in many parts of the world, although significant stigma and discrimination still exist in some regions and communities. Confidence score: 0.9
| 2312.11111#120 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While
recent generative AI models, such as large language models, have shown
impressive performance in various tasks, it remains unclear whether they truly
comprehend emotions. This paper aims to address this gap by incorporating
psychological theories to gain a holistic understanding of emotions in
generative AI models. Specifically, we propose three approaches: 1)
EmotionPrompt to enhance AI model performance, 2) EmotionAttack to impair AI
model performance, and 3) EmotionDecode to explain the effects of emotional
stimuli, both benign and malignant. Through extensive experiments involving
language and multi-modal models on semantic understanding, logical reasoning,
and generation tasks, we demonstrate that both textual and visual EmotionPrompt
can boost the performance of AI models while EmotionAttack can hinder it.
Additionally, EmotionDecode reveals that AI models can comprehend emotional
stimuli akin to the mechanism of dopamine in the human brain. Our work heralds
a novel avenue for exploring psychology to enhance our understanding of
generative AI models. This paper is an extended version of our previous work
EmotionPrompt (arXiv:2307.11760). | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760);
34 pages | null | cs.AI | 20231218 | 20231219 | [
{
"id": "2210.09261"
},
{
"id": "2311.03079"
},
{
"id": "2307.11760"
},
{
"id": "2307.09705"
},
{
"id": "2301.12597"
},
{
"id": "2303.12712"
},
{
"id": "2205.10782"
},
{
"id": "2109.07958"
},
{
"id": "2304.08485"
}
] |
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence | 2312.00752#1 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation. | 2312.00752#2 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
Foundation models (FMs), or large models pretrained on massive data then adapted for downstream tasks, have emerged as an effective paradigm in modern machine learning. The backbone of these FMs are often sequence models, operating on arbitrary sequences of inputs from a wide variety of domains such as language, images, speech, audio, time series, and genomics (Brown et al. 2020; Dosovitskiy et al. 2020; Ismail Fawaz et al. 2019; Oord et al. 2016; Poli et al. 2023; Sutskever, Vinyals, and Quoc V Le 2014). While this concept is agnostic to a particular choice of model architecture, modern FMs are predominantly based on a single type of sequence model: the Transformer (Vaswani et al. 2017) and its core attention layer (Bahdanau, Cho, and Bengio 2015). The efficacy of self-attention is attributed to its ability to route information densely within a context window, allowing it to model complex data. However, this property brings fundamental drawbacks: an inability to model anything outside of a finite window, and quadratic scaling with respect to the window length. An enormous body of research has | 2312.00752#4 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
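The quadratic scaling noted in the chunk above comes from the L x L score matrix that self-attention materializes over its context window. A minimal NumPy sketch (not from the paper; shapes and values are illustrative) makes the cost concrete:

```python
# Illustrative only: the (L, L) attention matrix is the source of quadratic
# time and memory in the window length L.
import numpy as np

L, d = 1024, 64                          # window length, head dimension (made-up sizes)
q, k, v = (np.random.randn(L, d) for _ in range(3))

scores = q @ k.T / np.sqrt(d)            # shape (L, L): grows as L**2
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = weights @ v                        # shape (L, d)
print(scores.shape)                      # (1024, 1024)
```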
2312.00752 | 5 | drawbacks: an inability to model anything outside of a finite window, and quadratic scaling with respect to the window length. An enormous body of research has appeared on more efficient variants of attention to overcome these drawbacks (Tay, Dehghani, Bahri, et al. 2022), but often at the expense of the very properties that make it effective. As of yet, none of these variants have been shown to be empirically effective at scale across domains. | 2312.00752#5 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 6 | Recently, structured state space sequence models (SSMs) (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021) have emerged as a promising class of architectures for sequence modeling. These models can be interpreted as a combination of recurrent neural networks (RNNs) and convolutional neural networks (CNNs), with inspiration from classical state space models (Kalman 1960). This class of models can be computed very efficiently as either a recurrence or convolution, with linear or near-linear scaling in sequence length. Additionally, they have principled
Equal contribution.
1 | 2312.00752#6 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
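As a concrete illustration of the "recurrence or convolution" computation mentioned in the chunk above, here is a minimal sketch (not the paper's code; all parameter values are made up) of a single-channel SSM with a diagonal discrete transition, unrolled as a linear recurrence in O(LN) time:

```python
# Minimal sketch of a time-invariant structured SSM run as a recurrence:
# h_t = A_bar * h_{t-1} + B_bar * x_t,  y_t = C . h_t
import numpy as np

N, L = 4, 16                              # state size, sequence length
A_bar = np.full(N, 0.9)                   # diagonal discrete transition (illustrative values)
B_bar = np.full(N, 0.1)
C = np.random.randn(N)

x = np.random.randn(L)                    # one input channel
h = np.zeros(N)
y = np.zeros(L)
for t in range(L):
    h = A_bar * h + B_bar * x[t]          # O(N) work per step -> O(L*N) overall
    y[t] = C @ h
print(y.shape)                            # (16,)
```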
2312.00752 | 7 | Equal contribution.
1
mechanisms for modeling long-range dependencies (Gu, Dao, et al. 2020) in certain data modalities, and have dominated benchmarks such as the Long Range Arena (Tay, Dehghani, Abnar, et al. 2021). Many flavors of SSMs (Gu, Goel, and Ré 2022; Gu, Gupta, et al. 2022; Gupta, Gu, and Berant 2022; Y. Li et al. 2023; Ma et al. 2023; Orvieto et al. 2023; Smith, Warrington, and Linderman 2023) have been successful in domains involving continuous signal data such as audio and vision (Goel et al. 2022; Nguyen, Goel, et al. 2022; Saon, Gupta, and Cui 2023). However, they have been less effective at modeling discrete and information-dense data such as text.
We propose a new class of selective state space models that improves on prior work on several axes to achieve the modeling power of Transformers while scaling linearly in sequence length. | 2312.00752#7 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 8 | We propose a new class of selective state space models that improves on prior work on several axes to achieve the modeling power of Transformers while scaling linearly in sequence length.
Selection Mechanism. First, we identify a key limitation of prior models: the ability to efficiently select data in an input-dependent manner (i.e. focus on or ignore particular inputs). Building on intuition based on important synthetic tasks such as selective copy and induction heads, we design a simple selection mechanism by parameterizing the SSM parameters based on the input. This allows the model to filter out irrelevant information and remember relevant information indefinitely.
Hardware-aware Algorithm. This simple change poses a technical challenge for the computation of the model; in fact, all prior SSM models must be time- and input-invariant in order to be computationally efficient. We overcome this with a hardware-aware algorithm that computes the model recurrently with a scan instead of convolution, but does not materialize the expanded state in order to avoid IO access between different levels of the GPU memory hierarchy. The resulting implementation is faster than previous methods both in theory (scaling linearly in sequence length, compared to pseudo-linear for all convolution-based SSMs) and on modern hardware (up to 3× faster on A100 GPUs). | 2312.00752#8 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
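The Selection Mechanism paragraph in the chunk above says the SSM parameters become functions of the input. A minimal sketch of that idea follows; the specific projections (W_B, W_C, W_dt), the softplus, and the diagonal A below are illustrative assumptions, not the paper's exact parameterization:

```python
# Sketch: per-token projections make B, C and the step size Delta depend on the
# input, so the recurrence can retain or forget content selectively.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

D, N, L = 2, 4, 10                             # channels, state size, length (made up)
rng = np.random.default_rng(0)
x = rng.standard_normal((L, D))

W_B = rng.standard_normal((D, N))              # hypothetical projection weights
W_C = rng.standard_normal((D, N))
W_dt = rng.standard_normal((D, 1))
A = -np.exp(rng.standard_normal((D, N)))       # negative diagonal A for stability (illustrative)

h = np.zeros((D, N))
y = np.zeros((L, D))
for t in range(L):
    B_t, C_t = x[t] @ W_B, x[t] @ W_C          # input-dependent B and C
    dt = softplus(x[t] @ W_dt)                 # positive, input-dependent step size
    A_bar = np.exp(dt * A)                     # ZOH-style discretization, per token
    B_bar = (A_bar - 1.0) / A * B_t            # diagonal ZOH formula for B_bar
    h = A_bar * h + B_bar * x[t][:, None]
    y[t] = (h * C_t).sum(axis=-1)
print(y.shape)                                 # (10, 2)
```

Because A_bar and B_bar now vary per token, this toy model is no longer time-invariant and cannot be computed as a single global convolution, which is what motivates the hardware-aware scan described in the same chunk.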
2312.00752 | 9 | Architecture. We simplify prior deep sequence model architectures by combining the design of prior SSM architectures (Dao, Fu, Saab, et al. 2023) with the MLP block of Transformers into a single block, leading to a simple and homogeneous architecture design (Mamba) incorporating selective state spaces.
Selective SSMs, and by extension the Mamba architecture, are fully recurrent models with key properties that make them suitable as the backbone of general foundation models operating on sequences. (i) High quality: selectivity brings strong performance on dense modalities such as language and genomics. (ii) Fast training and inference: computation and memory scale linearly in sequence length during training, and unrolling the model autoregressively during inference requires only constant time per step since it does not require a cache of previous elements. (iii) Long context: the quality and efficiency together yield performance improvements on real data up to sequence length 1M.
We empirically validate Mamba's potential as a general sequence FM backbone, in both pretraining quality and domain-specific task performance, on several types of modalities and settings:
⢠Synthetics. On important synthetic tasks such as copying and induction heads that have been proposed as being key to large language models, Mamba not only solves them easily but can extrapolate solutions indeï¬nitely long (>1M tokens). | 2312.00752#9 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
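Property (ii) in the chunk above, constant time per autoregressive step, follows from the state being a fixed-size tensor rather than a cache that grows with the generated prefix. A toy comparison (illustrative shapes and values, not any real implementation):

```python
# Sketch: recurrent generation updates a fixed-size state, while attention-style
# generation appends to a cache that grows with every emitted token.
import numpy as np

N, d_model, steps = 16, 8, 5
rng = np.random.default_rng(1)
A_bar = np.full(N, 0.95)
B_bar = rng.standard_normal(N)
C = rng.standard_normal(N)

state = np.zeros(N)          # fixed-size recurrent state
kv_cache = []                # what a Transformer keeps instead (one entry per token)

for t in range(steps):
    x_t = rng.standard_normal()
    state = A_bar * state + B_bar * x_t      # O(N) work, independent of t
    y_t = C @ state
    kv_cache.append(np.zeros(d_model))       # grows linearly with t
    print(f"step {t}: state size {state.size}, cache entries {len(kv_cache)}")
```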
2312.00752 | 10 | • Audio and Genomics. Mamba outperforms prior state-of-the-art models such as SaShiMi, Hyena, and Transformers on modeling audio waveforms and DNA sequences, both in pretraining quality and downstream metrics (e.g. reducing FID on a challenging speech generation dataset by more than half). In both settings, its performance improves with longer context up to million-length sequences.
⢠Language Modeling. Mamba is the ï¬rst linear-time sequence model that truly achieves Transformer-quality performance, both in pretraining perplexity and downstream evaluations. With scaling laws up to 1B parameters, we show that Mamba exceeds the performance of a large range of baselines, including very strong modern Transformer training recipes based on LLaMa (Touvron et al. 2023). Our Mamba language model has 5à generation throughput compared to Transformers of similar size, and Mamba-3Bâs quality matches that of Transformers twice its size (e.g. 4 points higher avg. on common sense reasoning compared to Pythia-3B and even exceeding Pythia-7B).
Model code and pre-trained checkpoints are open-sourced at https://github.com/state-spaces/mamba.
[Figure 1 graphic: Selective State Space Model with Hardware-aware State Expansion; panel labels include GPU SRAM and Selection Mechanism.] | 2312.00752#10 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 11 | [Figure 1 graphic: Selective State Space Model with Hardware-aware State Expansion; panel labels include GPU SRAM and Selection Mechanism.]
Figure 1: (Overview.) Structured SSMs independently map each channel (e.g. $D = 5$) of an input $x$ to output $y$ through a higher dimensional latent state $h$ (e.g. $N = 4$). Prior SSMs avoid materializing this large effective state ($DN$, times batch size $B$ and sequence length $L$) through clever alternate computation paths requiring time-invariance: the $(\Delta, A, B, C)$ parameters are constant across time. Our selection mechanism adds back input-dependent dynamics, which also requires a careful hardware-aware algorithm to only materialize the expanded states in more efficient levels of the GPU memory hierarchy.
# 2 State Space Models | 2312.00752#11 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 12 | # 2 State Space Models
Structured state space sequence models (S4) are a recent class of sequence models for deep learning that are broadly related to RNNs, CNNs, and classical state space models. They are inspired by a particular continuous system (1) that maps a 1-dimensional function or sequence $x(t) \in \mathbb{R} \mapsto y(t) \in \mathbb{R}$ through an implicit latent state $h(t) \in \mathbb{R}^N$. Concretely, S4 models are defined with four parameters $(\Delta, A, B, C)$, which define a sequence-to-sequence transformation in two stages.
$h'(t) = Ah(t) + Bx(t)$ (1a)   $y(t) = Ch(t)$ (1b) | 2312.00752#12 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 13 | $h_t = \bar{A}h_{t-1} + \bar{B}x_t$ (2a)   $y_t = Ch_t$ (2b)   $\bar{K} = (C\bar{B}, C\bar{A}\bar{B}, \dots, C\bar{A}^{k}\bar{B}, \dots)$ (3a)   $y = x * \bar{K}$ (3b)
Discretization. The first stage transforms the "continuous parameters" $(\Delta, A, B)$ to "discrete parameters" $(\bar{A}, \bar{B})$ through fixed formulas $\bar{A} = f_A(\Delta, A)$ and $\bar{B} = f_B(\Delta, A, B)$, where the pair $(f_A, f_B)$ is called a discretization rule. Various rules can be used such as the zero-order hold (ZOH) defined in equation (4).
$\bar{A} = \exp(\Delta A)$   $\bar{B} = (\Delta A)^{-1}(\exp(\Delta A) - I) \cdot \Delta B$   (4) | 2312.00752#13 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
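For the zero-order hold rule in equation (4) above, a diagonal A reduces the formulas to element-wise operations. A small sketch (illustrative values, not the paper's code):

```python
# Zero-order hold (ZOH) discretization of (Delta, A, B) -> (A_bar, B_bar),
# written element-wise for a diagonal A.
import numpy as np

N = 4
A = -np.arange(1.0, N + 1.0)              # diagonal continuous-time A (illustrative, stable)
B = np.ones(N)
delta = 0.1                               # step size

A_bar = np.exp(delta * A)                                 # exp(Delta A)
B_bar = (np.exp(delta * A) - 1.0) / A * B                 # (Delta A)^-1 (exp(Delta A) - I) Delta B, diagonal case
print(A_bar)
print(B_bar)
```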
2312.00752 | 14 | $\bar{A} = \exp(\Delta A)$   $\bar{B} = (\Delta A)^{-1}(\exp(\Delta A) - I) \cdot \Delta B$   (4)
Discretization has deep connections to continuous-time systems which can endow them with additional properties such as resolution invariance (Nguyen, Goel, et al. 2022) and automatically ensuring that the model is properly normalized (Gu, Johnson, Timalsina, et al. 2023; Orvieto et al. 2023). It also has connections to gating mechanisms of RNNs (Gu, Gulcehre, et al. 2020; Tallec and Ollivier 2018) which we will revisit in Section 3.5. However, from a mechanical point of view discretization can simply be viewed as the first step of the computation graph in the forward pass of an SSM. Alternate flavors of SSMs can bypass the discretization step and parameterize $(\bar{A}, \bar{B})$ directly instead (Zhang et al. 2023), which may be easier to reason about.
Computation. After the parameters have been transformed from $(\Delta, A, B, C) \mapsto (\bar{A}, \bar{B}, C)$, the model can be computed in two ways, either as a linear recurrence (2) or a global convolution (3).
3 | 2312.00752#14 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
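The two computation paths in the "Computation" paragraph above give identical outputs for a time-invariant SSM: unrolling the recurrence (2) yields exactly the convolution (3) with kernel (C B_bar, C A_bar B_bar, ...). A quick numerical check (illustrative parameters only):

```python
# Check that recurrent mode (2) and convolutional mode (3) agree for an LTI SSM.
import numpy as np

N, L = 3, 8
rng = np.random.default_rng(2)
A_bar = np.diag(rng.uniform(0.5, 0.9, N))        # fixed (time-invariant) parameters
B_bar = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
x = rng.standard_normal(L)

# Recurrent mode: h_t = A_bar h_{t-1} + B_bar x_t,  y_t = C h_t
h = np.zeros((N, 1))
y_rec = []
for t in range(L):
    h = A_bar @ h + B_bar * x[t]
    y_rec.append((C @ h).item())

# Convolutional mode: y = x * K_bar with K_bar[k] = C A_bar^k B_bar
K = np.array([(C @ np.linalg.matrix_power(A_bar, k) @ B_bar).item() for k in range(L)])
y_conv = [sum(K[k] * x[t - k] for k in range(t + 1)) for t in range(L)]

print(np.allclose(y_rec, y_conv))                # True
```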
2312.00752 | 15 | 3
Commonly, the model uses the convolutional mode (3) for efficient parallelizable training (where the whole input sequence is seen ahead of time), and is switched into recurrent mode (2) for efficient autoregressive inference (where the inputs are seen one timestep at a time).
Linear Time Invariance (LTI). An important property of equations (1) to (3) is that the model's dynamics are constant through time. In other words $(\Delta, A, B, C)$, and consequently $(\bar{A}, \bar{B})$ as well, are fixed for all time-steps. This property is called linear time invariance (LTI), which is deeply connected to recurrence and convolutions. Informally, we think of LTI SSMs as being equivalent to any linear recurrence (2a) or convolution (3b), and use LTI as an umbrella term for these classes of models.
Thus far, all structured SSMs have been LTI (e.g. computed as convolutions) because of fundamental efficiency constraints, discussed in Section 3.3. However, a core insight of this work is that LTI models have fundamental limitations in modeling certain types of data, and our technical contributions involve removing the LTI constraint while overcoming the efficiency bottlenecks. | 2312.00752#15 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 16 | Structure and Dimensions. Finally, we note that structured SSMs are so named because computing them efficiently also requires imposing structure on the A matrix. The most popular form of structure is diagonal (Gu, Gupta, et al. 2022; Gupta, Gu, and Berant 2022; Smith, Warrington, and Linderman 2023), which we also use. In this case, the $A \in \mathbb{R}^{N \times N}$, $B \in \mathbb{R}^{N \times 1}$, $C \in \mathbb{R}^{1 \times N}$ matrices can all be represented by $N$ numbers. To operate over an input sequence $x$ of batch size $B$ and length $L$ with $D$ channels, the SSM is applied independently to each channel. Note that in this case, the total hidden state has dimension $DN$ per input, and computing it over the sequence length requires $O(BLDN)$ time and memory; this is the root of the fundamental efficiency bottleneck addressed in Section 3.3. | 2312.00752#16 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
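The DN-per-input state size and the O(BLDN) cost in the chunk above are easy to make concrete with back-of-envelope numbers (all sizes below are made up):

```python
# Back-of-envelope: materializing the hidden state for every position costs
# B * L * D * N elements, which is the bottleneck discussed in Section 3.3.
B, L, D, N = 8, 2048, 1024, 16
per_position = D * N                        # hidden state per position
total = B * L * D * N                       # across the whole batch and sequence
print(f"{per_position:,} values per position")
print(f"{total:,} values total (~{total * 4 / 2**30:.1f} GiB in fp32)")
```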
2312.00752 | 17 | General State Space Models. We note that the term state space model has a very broad meaning which simply represents the notion of any recurrent process with a latent state. It has been used to refer to many disparate concepts in different disciplines, including Markov decision processes (MDP) (reinforcement learning (Hafner et al. 2020)), dynamic causal modeling (DCM) (computational neuroscience (Friston, Harrison, and Penny 2003)), Kalman filters (controls (Kalman 1960)), hidden Markov models (HMM) and linear dynamical systems (LDS) (machine learning), and recurrent (and sometimes convolutional) models at large (deep learning).
Throughout this entire paper we use the term "SSM" to refer exclusively to the class of structured SSMs or S4 models (Gu, Goel, and Ré 2022; Gu, Gupta, et al. 2022; Gupta, Gu, and Berant 2022; Hasani et al. 2023; Ma et al. 2023; Smith, Warrington, and Linderman 2023) and use these terms interchangeably. For convenience we may also include derivatives of such models, such as those focusing on either the linear-recurrence or global-convolution viewpoints (Y. Li et al. 2023; Orvieto et al. 2023; Poli et al. 2023), and clarify nuances when necessary. | 2312.00752#17 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 18 | SSM Architectures. SSMs are standalone sequence transformations that can be incorporated into end-to-end neural network architectures. (We also sometimes call SSM architectures SSNNs, which are to SSM layers as CNNs are to linear convolution layers.) We discuss some of the most well-known SSM architectures, many of which will also serve as our primary baselines.
⢠Linear attention (Katharopoulos et al. 2020) is an approximation of self-attention involving a recurrence which can be viewed as a degenerate linear SSM.
⢠H3 (Dao, Fu, Saab, et al. 2023) generalized this recurrence to use S4; it can be viewed as an architecture with an SSM sandwiched by two gated connections (Figure 3). H3 also inserts a standard local convolution, which they frame as a shift-SSM, before the main SSM layer.
⢠Hyena (Poli et al. 2023) uses the same architecture as H3 but replaces the S4 layer with an MLP-parameterized global convolution (Romero et al. 2021).
⢠RetNet (Y. Sun et al. 2023) adds an additional gate to the architecture and uses a simpler SSM, allowing an alternative parallelizable computation path, using a variant of multi-head attention (MHA) instead of convolutions.
4 | 2312.00752#18 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
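The first bullet in the chunk above views linear attention as a degenerate linear SSM because its update can be written as a constant-size recurrent state. One common (unnormalized) form of that recurrence, with an illustrative feature map, looks like this sketch; it is not the paper's formulation, just the standard recurrent reading of linear attention:

```python
# Sketch of the recurrent view of (unnormalized) linear attention: the running
# matrix S_t acts like an SSM state, updated once per token and read out with q_t.
import numpy as np

d, L = 4, 6
rng = np.random.default_rng(3)
q, k, v = (rng.standard_normal((L, d)) for _ in range(3))

def phi(u):
    # A simple positive feature map (illustrative choice).
    return np.maximum(u, 0.0) + 1.0

S = np.zeros((d, d))                          # state accumulating phi(k_t) v_t^T
for t in range(L):
    S = S + np.outer(phi(k[t]), v[t])         # constant-size state update
    y_t = phi(q[t]) @ S                       # read-out, analogous to y_t = C h_t
print(y_t.shape)                              # (4,)
```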
2312.00752 | 19 | 4
⢠RWKV (B. Peng et al. 2023) is a recent RNN designed for language modeling based on another linear attention approximation (attention-free Transformer (S. Zhai et al. 2021)). Its main âWKVâ mechanism involves LTI recurrences and can be viewed as the ratio of two SSMs.
Other closely related SSMs and architectures are discussed further in an extended related work (Appendix B). We highlight in particular S5 (Smith, Warrington, and Linderman 2023), QRNN (Bradbury et al. 2016), and SRU (Lei et al. 2017), which we view as the most closely related methods to our core selective SSM.
# 3 Selective State Space Models
We motivate our selection mechanism using intuition from synthetic tasks (Section 3.1), then explain how to incorporate this mechanism into state space models (Section 3.2). The resulting time-varying SSMs cannot use convolutions, presenting a technical challenge of how to compute them efficiently. We overcome this with a hardware-aware algorithm that exploits the memory hierarchy on modern hardware (Section 3.3). We then describe a simple SSM architecture without attention or even MLP blocks (Section 3.4). Finally, we discuss some additional properties of selection mechanisms (Section 3.5).
# 3.1 Motivation: Selection as a Means of Compression | 2312.00752#19 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 20 | # 3.1 Motivation: Selection as a Means of Compression
We argue that a fundamental problem of sequence modeling is compressing context into a smaller state. In fact, we can view the tradeoffs of popular sequence models from this point of view. For example, attention is both effective and inefficient because it explicitly does not compress context at all. This can be seen from the fact that autoregressive inference requires explicitly storing the entire context (i.e. the KV cache), which directly causes the slow linear-time inference and quadratic-time training of Transformers. On the other hand, recurrent models are efficient because they have a finite state, implying constant-time inference and linear-time training. However, their effectiveness is limited by how well this state has compressed the context.
To understand this principle, we focus on two running examples of synthetic tasks (Figure 2).
⢠The Selective Copying task modiï¬es the popular Copying task (Arjovsky, Shah, and Bengio 2016) by varying the position of the tokens to memorize. It requires content-aware reasoning to be able to memorize the relevant tokens (colored) and ï¬lter out the irrelevant ones (white). | 2312.00752#20 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
• The Induction Heads task is a well-known mechanism hypothesized to explain the majority of in-context learning abilities of LLMs (Olsson et al. 2022). It requires context-aware reasoning to know when to produce the correct output in the appropriate context (black).
These tasks reveal the failure mode of LTI models. From the recurrent view, their constant dynamics (e.g. the (A, B) transitions in (2)) cannot let them select the correct information from their context, or affect the hidden state passed along the sequence in an input-dependent way. From the convolutional view, it is known that global convolutions can solve the vanilla Copying task (Romero et al. 2021) because it only requires time-awareness, but that they have difficulty with the Selective Copying task because of lack of content-awareness (Figure 2). More concretely, the spacing between inputs-to-outputs is varying and cannot be modeled by static convolution kernels.
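To make the distinction concrete, here is a minimal NumPy sketch of a Selective-Copying-style data generator (the vocabulary size, sequence length, and number of tokens to memorize are arbitrary illustrative choices, not the settings used in any experiments): the content tokens land at random positions, so a model that only exploits fixed time offsets cannot recover the target.

```python
import numpy as np

def selective_copying_batch(batch=4, seq_len=64, n_memorize=8, vocab=16, seed=0):
    """Toy Selective-Copying-style generator (illustrative only).

    Token 0 is a noise/blank token; tokens 1..vocab-1 are content tokens.
    The content tokens are scattered at random positions (random spacing),
    and the target is those tokens in left-to-right order, so solving the
    task requires content-aware filtering rather than fixed time offsets.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros((batch, seq_len), dtype=np.int64)
    y = np.zeros((batch, n_memorize), dtype=np.int64)
    for b in range(batch):
        pos = np.sort(rng.choice(seq_len, size=n_memorize, replace=False))
        tok = rng.integers(1, vocab, size=n_memorize)
        x[b, pos] = tok          # inputs: content tokens at random positions
        y[b] = tok               # target: the same tokens, in order
    return x, y

x, y = selective_copying_batch()
print(x.shape, y.shape)  # (4, 64) (4, 8)
```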
In summary, the efficiency vs. effectiveness tradeoff of sequence models is characterized by how well they compress their state: efficient models must have a small state, while effective models must have a state that contains all necessary information from the context. In turn, we propose that a fundamental principle for building sequence models is selectivity: the context-aware ability to focus on or filter out inputs into a sequential state. In particular, a selection mechanism controls how information propagates or interacts along the sequence dimension (see Section 3.5 for more discussion).
# 3.2 Improving SSMs with Selection
One method of incorporating a selection mechanism into models is by letting their parameters that affect interactions along the sequence (e.g. the recurrent dynamics of an RNN or the convolution kernel of a CNN) be input-dependent.
Figure 2: (Left) The standard version of the Copying task involves constant spacing between input and output elements and is easily solved by time-invariant models such as linear recurrences and global convolutions. (Right Top) The Selective Copying task has random spacing in between inputs and requires time-varying models that can selectively remember or ignore inputs depending on their content. (Right Bottom) The Induction Heads task is an example of associative recall that requires retrieving an answer based on context, a key ability for LLMs.
Algorithm 1 SSM (S4)
Input: x : (B, L, D)
Output: y : (B, L, D)
1: A : (D, N) ← Parameter    ▷ Represents structured N × N matrix
2: B : (D, N) ← Parameter
3: C : (D, N) ← Parameter
4: Δ : (D) ← τ_Δ(Parameter)
5: Ā, B̄ : (D, N) ← discretize(Δ, A, B)
6: y ← SSM(Ā, B̄, C)(x)

Algorithm 2 SSM + Selection (S6)
Input: x : (B, L, D)
Output: y : (B, L, D)
1: A : (D, N) ← Parameter    ▷ Represents structured N × N matrix
2: B : (B, L, N) ← s_B(x)
3: C : (B, L, N) ← s_C(x)
4: Δ : (B, L, D) ← τ_Δ(Parameter + s_Δ(x))
5: Ā, B̄ : (B, L, D, N) ← discretize(Δ, A, B)
6: y ← SSM(Ā, B̄, C)(x)
Algorithms 1 and 2 illustrate the main selection mechanism that we use. The main difference is simply making several parameters Δ, B, C functions of the input, along with the associated changes to tensor shapes throughout. In particular, we highlight that these parameters now have a length dimension L, meaning that the model has changed from time-invariant to time-varying. (Note that shape annotations were described in Section 2.) This loses the equivalence to convolutions (3) with implications for its efficiency, discussed next.
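To make the shapes and the recurrence concrete, here is a naive NumPy reference for the selective SSM of Algorithm 2. This is a sketch for clarity only: the example sizes are arbitrary, and the discretization uses Ā_t = exp(Δ_t A) with a simplified B̄_t ≈ Δ_t·B_t as a stand-in for the discretize step.

```python
import numpy as np

def selective_scan_reference(x, A, B, C, delta):
    """Naive per-step evaluation of the selective SSM (for clarity, not speed).

    x     : (Bsz, L, D)  input sequence
    A     : (D, N)       state matrix (one row of a diagonal system per channel)
    B     : (Bsz, L, N)  input-dependent, from s_B(x)
    C     : (Bsz, L, N)  input-dependent, from s_C(x)
    delta : (Bsz, L, D)  input-dependent step size, after tau_Delta
    returns y : (Bsz, L, D)
    """
    Bsz, L, D = x.shape
    y = np.zeros_like(x)
    # A real implementation fuses these loops and parallelizes over the scan.
    for b in range(Bsz):
        h = np.zeros((D, A.shape[1]))                         # hidden state per channel
        for t in range(L):
            dA = np.exp(delta[b, t][:, None] * A)             # Ā_t = exp(Δ_t A), (D, N)
            dB = delta[b, t][:, None] * B[b, t][None, :]      # simplified B̄_t ≈ Δ_t B_t
            h = dA * h + dB * x[b, t][:, None]                # h_t = Ā_t h_{t-1} + B̄_t x_t
            y[b, t] = h @ C[b, t]                             # y_t = C_t h_t
    return y

# Tiny smoke test with random inputs (arbitrary sizes).
rng = np.random.default_rng(0)
Bsz, L, D, N = 2, 10, 4, 3
x = rng.normal(size=(Bsz, L, D))
A = -np.abs(rng.normal(size=(D, N)))                          # negative for stability
Bp, Cp = rng.normal(size=(Bsz, L, N)), rng.normal(size=(Bsz, L, N))
delta = np.log1p(np.exp(rng.normal(size=(Bsz, L, D))))        # softplus > 0
print(selective_scan_reference(x, A, Bp, Cp, delta).shape)    # (2, 10, 4)
```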
We specifically choose s_B(x) = Linear_N(x), s_C(x) = Linear_N(x), s_Δ(x) = Broadcast_D(Linear_1(x)), and τ_Δ = softplus, where Linear_d is a parameterized projection to dimension d. The choice of s_Δ and τ_Δ is due to a connection to RNN gating mechanisms explained in Section 3.5.
# 3.3 Efficient Implementation of Selective SSMs
Hardware-friendly architectures such as convolutions (Krizhevsky, Sutskever, and Hinton 2012) and Transformers (Vaswani et al. 2017) enjoy widespread application. Here we aim to make selective SSMs efficient on modern hardware (GPU) as well. The selection mechanism is quite natural, and earlier works attempted to incorporate special cases of selection, such as letting Δ vary over time in recurrent SSMs (Gu, Dao, et al. 2020). However, as previously mentioned, a core limitation in the usage of SSMs is their computational efficiency, which was why S4 and all derivatives used LTI (non-selective) models, most commonly in the form of global convolutions.
# 3.3.1 Motivation of Prior Models
We first revisit this motivation and overview our approach to overcome limitations of prior methods.
• At a high level, recurrent models such as SSMs always balance a tradeoff between expressivity and speed: as discussed in Section 3.1, models with larger hidden state dimension should be more effective but slower. Thus we want to maximize hidden state dimension without paying speed and memory costs.
• Note that the recurrent mode is more flexible than the convolution mode, since the latter (3) is derived from expanding the former (2) (Gu, Goel, and Ré 2022; Gu, Johnson, Goel, et al. 2021). However, this would require computing and materializing the latent state h with shape (B, L, D, N), much larger (by a factor of N, the SSM state dimension) than the input x and output y of shape (B, L, D). Thus the more efficient convolution mode was introduced, which could bypass the state computation and materialize a convolution kernel (3a) of only (B, L, D).
• Prior LTI SSMs leverage the dual recurrent-convolutional forms to increase the effective state dimension by a factor of N (≈ 10–100), much larger than traditional RNNs, without efficiency penalties.
# 3.3.2 Overview of Selective Scan: Hardware-Aware State Expansion
The selection mechanism is designed to overcome the limitations of LTI models; at the same time, we therefore need to revisit the computation problem of SSMs. We address this with three classical techniques: kernel fusion, parallel scan, and recomputation. We make two main observations:
• The naive recurrent computation uses O(BLDN) FLOPs while the convolutional computation uses O(BLD log(L)) FLOPs, and the former has a lower constant factor. Thus for long sequences and not-too-large state dimension N, the recurrent mode can actually use fewer FLOPs (see the short arithmetic sketch after this list).
• The two challenges are the sequential nature of recurrence, and the large memory usage. To address the latter, just like the convolutional mode, we can attempt to not actually materialize the full state h.
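The short arithmetic sketch referenced in the first bullet above, with arbitrary example sizes (constant factors are ignored, so this only illustrates the N versus log L scaling):

```python
import math

B, D, N = 1, 1024, 16                         # example sizes, not the paper's settings
for L in (1024, 8192, 65536, 1_000_000):
    recurrent = B * L * D * N                 # O(BLDN)
    convolutional = B * L * D * math.log2(L)  # O(BLD log L)
    print(f"L={L:>9}: recurrent/convolutional ~ {recurrent / convolutional:.2f}")
# Ratios fall from 1.60 at L=1024 to 0.80 at L=1e6: once log2(L) exceeds N,
# the recurrent count is smaller even before its lower constant factor.
```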
The main idea is to leverage properties of modern accelerators (GPUs) to materialize the state h only in more efficient levels of the memory hierarchy. In particular, most operations (except matrix multiplication) are bounded by memory bandwidth (Dao, Fu, Ermon, et al. 2022; Ivanov et al. 2021; Williams, Waterman, and Patterson 2009). This includes our scan operation, and we use kernel fusion to reduce the amount of memory IOs, leading to a significant speedup compared to a standard implementation.
Concretely, instead of preparing the scan input (Ā, B̄) of size (B, L, D, N) in GPU HBM (high-bandwidth memory), we load the SSM parameters (Δ, A, B, C) directly from slow HBM to fast SRAM, perform the discretization and recurrence in SRAM, and then write the final outputs of size (B, L, D) back to HBM.
To avoid the sequential recurrence, we observe that despite not being linear it can still be parallelized with a work-efficient parallel scan algorithm (Blelloch 1990; Martin and Cundy 2018; Smith, Warrington, and Linderman 2023).
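The key fact is that the first-order recurrence h_t = ā_t h_{t-1} + b̄_t, while not a plain sum, is a scan over an associative operator and can therefore be evaluated in O(log L) parallel steps. Below is a minimal NumPy sketch for the scalar case, using a Hillis-Steele-style scan purely for illustration (the fused kernel described here would use a work-efficient Blelloch-style scan):

```python
import numpy as np

def sequential_scan(a, b):
    """h_t = a_t * h_{t-1} + b_t, evaluated step by step."""
    h, acc = np.empty_like(b), 0.0
    for t in range(len(b)):
        acc = a[t] * acc + b[t]
        h[t] = acc
    return h

def scan_via_associative_op(a, b):
    """Same result via the associative combine (a1,b1)∘(a2,b2) = (a1*a2, a2*b1 + b2),
    applied in log2(L) doubling steps (Hillis-Steele style)."""
    A, B = a.copy(), b.copy()
    shift = 1
    while shift < len(a):
        A_prev = np.concatenate([np.ones(shift), A[:-shift]])    # identity element (1, 0)
        B_prev = np.concatenate([np.zeros(shift), B[:-shift]])
        A, B = A_prev * A, A * B_prev + B                        # RHS uses the old A, B
        shift *= 2
    return B

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.0, size=16)   # stand-ins for the per-step Ā_t
b = rng.normal(size=16)              # stand-ins for the per-step B̄_t x_t
print(np.allclose(sequential_scan(a, b), scan_via_associative_op(a, b)))  # True
```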
Finally, we must also avoid saving the intermediate states, which are necessary for backpropagation. We carefully apply the classic technique of recomputation to reduce the memory requirements: the intermediate states are not stored but recomputed in the backward pass when the inputs are loaded from HBM to SRAM. As a result, the fused selective scan layer has the same memory requirements as an optimized transformer implementation with FlashAttention.
Details of the fused kernel and recomputation are in Appendix D. The full Selective SSM layer and algorithm are illustrated in Figure 1.
# 3.4 A Simplified SSM Architecture
As with structured SSMs, selective SSMs are standalone sequence transformations that can be flexibly incorporated into neural networks. The H3 architecture is the basis for the most well-known SSM architectures (Section 2), which are generally comprised of a block inspired by linear attention interleaved with an MLP (multi-layer perceptron) block. We simplify this architecture by combining these two components into one, which is stacked homogeneously (Figure 3). This is inspired by the gated attention unit (GAU) (Hua et al. 2022), which did something similar for attention.
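As a structural sketch only, the combined block can be written as below. This assumes an input projection that produces both branches at once, leaves the inner selective SSM as a stub argument, and omits other components of the full block (normalization, residual, etc.); it is meant to mirror the description in Figure 3 and the parameter accounting that follows, not to reproduce the exact implementation.

```python
import numpy as np

def silu(z):
    return z / (1.0 + np.exp(-z))            # SiLU / Swish activation

def mamba_block_skeleton(x, W_in, W_out, ssm_fn):
    """Skeleton of the gated block sketched in Figure 3 (illustrative only).

    x      : (L, D)        input sequence
    W_in   : (D, 2*E*D)    input projection producing both branches
    W_out  : (E*D, D)      output projection back to the model dimension
    ssm_fn : callable (L, E*D) -> (L, E*D), stand-in for the selective SSM
    """
    u = x @ W_in                              # (L, 2*E*D)
    main, gate = np.split(u, 2, axis=-1)      # two branches of width E*D
    main = ssm_fn(silu(main))                 # activation in place of the first gate, then SSM
    y = main * silu(gate)                     # multiplicative gate from the second branch
    return y @ W_out                          # (L, D)

# Smoke test with an identity stand-in for the SSM and arbitrary sizes.
rng = np.random.default_rng(0)
L, D, E = 8, 16, 2
x = rng.normal(size=(L, D))
W_in = rng.normal(size=(D, 2 * E * D)) / np.sqrt(D)
W_out = rng.normal(size=(E * D, D)) / np.sqrt(E * D)
print(mamba_block_skeleton(x, W_in, W_out, ssm_fn=lambda h: h).shape)  # (8, 16)
```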
Figure 3: (Architecture.) Our simplified block design combines the H3 block, which is the basis of most SSM architectures, with the ubiquitous MLP block of modern neural networks. Instead of interleaving these two blocks, we simply repeat the Mamba block homogeneously. Compared to the H3 block, Mamba replaces the first multiplicative gate with an activation function. Compared to the MLP block, Mamba adds an SSM to the main branch. For σ we use the SiLU / Swish activation (Hendrycks and Gimpel 2016; Ramachandran, Zoph, and Quoc V Le 2017).
This architecture involves expanding the model dimension D by a controllable expansion factor E. For each block, most of the parameters (3ED²) are in the linear projections (2ED² for input projections, ED² for output projection) while the inner SSM contributes less.
The number of SSM parameters (projections for Δ, B, C, and the matrix A) is much smaller in comparison. We repeat this block, interleaved with standard normalization and residual connections, to form the Mamba architecture. We always fix E = 2 in our experiments and use two stacks of the block to match the 12D² parameters of a Transformer's interleaved MHA (multi-head attention) and MLP blocks. We use the SiLU / Swish activation function (Hendrycks and Gimpel 2016; Ramachandran, Zoph, and Quoc V Le 2017), motivated so that the Gated MLP becomes the popular "SwiGLU" variant (Chowdhery et al. 2023; Shazeer 2020; Touvron et al. 2023). Finally, we additionally use an optional normalization layer (we choose LayerNorm (J. L. Ba, Kiros, and Hinton 2016)), motivated by RetNet's usage of a normalization layer in a similar location (Y. Sun et al. 2023).
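A quick sanity check of this parameter accounting, using D = 1024 as an arbitrary example and the common 4D² (attention projections) + 8D² (MLP with expansion factor 4) breakdown for a Transformer layer (that breakdown is an assumption of this sketch, not a figure quoted from the text):

```python
D, E = 1024, 2
mamba_block = 3 * E * D * D                 # 2ED^2 input projections + ED^2 output projection
two_mamba_blocks = 2 * mamba_block          # two stacked blocks, as in the text
transformer_layer = 4 * D * D + 8 * D * D   # MHA (Q, K, V, O) + MLP with expansion factor 4
print(two_mamba_blocks, transformer_layer, two_mamba_blocks == 12 * D * D)
# 12582912 12582912 True
```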
# 3.5 Properties of Selection Mechanisms
The selection mechanism is a broader concept that can be applied in different ways, such as to more traditional RNNs or CNNs, to different parameters (e.g. A in Algorithm 2), or using different transformations s(x).
# 3.5.1 Connection to Gating Mechanisms
We highlight the most important connection: the classical gating mechanism of RNNs is an instance of our selection mechanism for SSMs. We note that the connection between RNN gating and the discretization of continuous-time systems is well established (Funahashi and Nakamura 1993; Tallec and Ollivier 2018). In fact, Theorem 1 is an improvement of Gu, Johnson, Goel, et al. (2021, Lemma 3.1) generalizing to the ZOH discretization and input-dependent gates (proof in Appendix C). More broadly, Δ in SSMs can be seen to play a generalized role of the RNN gating mechanism. In line with prior work, we adopt the view that discretization of SSMs is the principled foundation of heuristic gating mechanisms.
Theorem 1. When N = 1, A = −1, B = 1, s_Δ = Linear(x), and τ_Δ = softplus, then the selective SSM recurrence (Algorithm 2) takes the form
g_t = σ(Linear(x_t))
h_t = (1 − g_t) h_{t−1} + g_t x_t.     (5)
As mentioned in Section 3.2, our specific choices of s_Δ, τ_Δ are from this connection. In particular, note that if a given input x_t should be completely ignored (as necessary in the synthetic tasks), all D channels should ignore it, and so we project the input down to 1 dimension before repeating/broadcasting with Δ.
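Theorem 1 can be checked numerically in a few lines: with N = 1, A = −1, B = 1, the ZOH discretization gives Ā = exp(−Δ) and B̄ = 1 − exp(−Δ), and with Δ = softplus(z) these reduce to 1 − σ(z) and σ(z). (This sketch assumes the standard ZOH formulas for the discretization step (4).)

```python
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
z = rng.normal(size=1000)          # stand-in for Linear(x_t)
delta = softplus(z)                # Δ_t = τ_Δ(Linear(x_t))

# ZOH discretization with N = 1, A = -1, B = 1:
abar = np.exp(-delta)              # Ā_t = exp(Δ_t · A)
bbar = 1.0 - np.exp(-delta)        # B̄_t = A^{-1}(exp(Δ_t A) - 1) · B

g = sigmoid(z)                     # the gate g_t in equation (5)
print(np.allclose(abar, 1.0 - g), np.allclose(bbar, g))  # True True
```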
# 3.5.2 Interpretation of Selection Mechanisms
We elaborate on two particular mechanistic effects of selection.
Variable Spacing. Selectivity allows filtering out irrelevant noise tokens that may occur between inputs of interest. This is exemplified by the Selective Copying task, but occurs ubiquitously in common data modalities, particularly for discrete data; for example, the presence of language fillers such as "um". This property arises because the model can mechanistically filter out any particular input x_t, for example in the gated RNN case (Theorem 1) when g_t → 0.
Filtering Context. It has been empirically observed that many sequence models do not improve with longer context (F. Shi et al. 2023), despite the principle that more context should lead to strictly better performance. An explanation is that many sequence models cannot effectively ignore irrelevant context when necessary; an intuitive example is global convolutions (and general LTI models). On the other hand, selective models can simply reset their state at any time to remove extraneous history, and thus their performance in principle improves monotonically with context length (e.g. Section 4.3.2).
Boundary Resetting. In settings where multiple independent sequences are stitched together, Transformers can keep them separate by instantiating a particular attention mask, while LTI models will bleed information between the sequences. Selective SSMs can also reset their state at boundaries (e.g. Δ_t → ∞, or Theorem 1 when g_t → 1). These settings may occur artificially (e.g. packing documents together to improve hardware utilization) or naturally (e.g. episode boundaries in reinforcement learning (Lu et al. 2023)).
Additionally, we elaborate on effects of each selective parameter.
2312.00752 | 40 | Interpretation of Δ. In general, Δ controls the balance between how much to focus on or ignore the current input x_t. It generalizes RNN gates (e.g. g_t in Theorem 1): mechanically, a large Δ resets the state h and focuses on the current input x, while a small Δ persists the state and ignores the current input. SSMs (1)-(2) can be interpreted as a continuous system discretized by a timestep Δ, and in this context the intuition is that a large Δ → ∞ represents the system focusing on the current input for longer (thus "selecting" it and forgetting its current state), while a small Δ → 0 represents a transient input that is ignored.
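As a concrete illustration of this intuition, the following small sketch applies the zero-order-hold discretization to a scalar channel; a large Δ makes the discrete recurrence overwrite its state with the current input, while a small Δ makes it persist (a toy example, not the paper's kernel):

```python
import numpy as np

def discretize(delta, A, B):
    # Zero-order hold for a scalar channel:
    # Abar = exp(delta * A), Bbar = (delta * A)^-1 (exp(delta * A) - 1) * delta * B
    Abar = np.exp(delta * A)
    Bbar = (Abar - 1.0) / A * B
    return Abar, Bbar

A, B = -1.0, 1.0
for delta in (0.01, 10.0):
    Abar, Bbar = discretize(delta, A, B)
    # Recurrence h_t = Abar * h_{t-1} + Bbar * x_t:
    # small delta -> Abar ~ 1 (keep the state), large delta -> Abar ~ 0 (take the new input).
    print(f"delta={delta:>5}: Abar={Abar:.4f}  Bbar={Bbar:.4f}")
```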
Interpretation of A. We remark that while the A parameter could also be selective, it ultimately affects the model only through its interaction with Δ via Ā = exp(ΔA) (the discretization (4)). Thus selectivity in Δ is enough to ensure selectivity in (A, B), and is the main source of improvement. We hypothesize that making A selective in addition to (or instead of) Δ would have similar performance, and leave it out for simplicity. | 2312.00752#40 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 41 | Interpretation of B and C. As discussed in Section 3.1, the most important property of selectivity is filtering out irrelevant information so that a sequence model's context can be compressed into an efficient state. In an SSM, modifying B and C to be selective allows finer-grained control over whether to let an input x_t into the state h_t or the state into the output y_t. These can be interpreted as allowing the model to modulate the recurrent dynamics based on content (input) and context (hidden states) respectively.
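To make the roles of selective B and C concrete, the following NumPy sketch runs the recurrence for a single channel d with input-dependent B_t, C_t, and Δ_t; the weight matrices are hypothetical stand-ins for the learned Linear projections, and this is an illustration of the mechanism rather than the hardware-aware implementation:

```python
import numpy as np

def selective_scan_channel(xs, d, A, W_B, W_C, w_delta):
    """Diagonal selective SSM recurrence for channel d of the input.

    B_t and C_t are functions of the full input x_t, so the model decides per
    token whether to write x_t[d] into the state h_t (via B_t) and whether to
    read the state back out into y_t (via C_t)."""
    h, ys = np.zeros(A.shape[0]), []
    for x in xs:                                  # xs has shape (L, D)
        delta = np.log1p(np.exp(w_delta @ x))     # softplus keeps the step size positive
        B_t, C_t = W_B @ x, W_C @ x               # input-dependent (selective) B and C
        Abar = np.exp(delta * A)                  # elementwise discretization (diagonal A)
        Bbar = (Abar - 1.0) / A * B_t
        h = Abar * h + Bbar * x[d]                # content-dependent write
        ys.append(C_t @ h)                        # context-dependent read
    return np.array(ys)

rng = np.random.default_rng(0)
L, D, N = 8, 4, 3
ys = selective_scan_channel(
    rng.normal(size=(L, D)), 0, -(np.arange(N) + 1.0),
    rng.normal(size=(N, D)), rng.normal(size=(N, D)), rng.normal(size=D))
print(ys.shape)  # (8,)
```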
# 3.6 Additional Model Details
Real vs. Complex. Most prior SSMs use complex numbers in their state h, which is necessary for strong performance on many tasks (Gu, Goel, and Ré 2022). However, it has been empirically observed that completely real-valued SSMs seem to work fine, and possibly even better, in some settings (Ma et al. 2023). We use real values as the default, which work well for all but one of our tasks; we hypothesize that the complex-real tradeoff is related to the continuous-discrete spectrum in data modalities, where complex numbers are helpful for continuous modalities (e.g. audio, video) but not discrete (e.g. text, DNA).
| 2312.00752#41 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 42 | Initialization. Most prior SSMs also suggest special initializations, particularly in the complex-valued case, which can help in several settings such as low-data regimes. Our default initialization for the complex case is S4D-Lin and for the real case is S4D-Real (Gu, Gupta, et al. 2022), which is based on the HIPPO theory (Gu, Dao, et al. 2020). These define the n-th element of A as −1/2 + ni and −(n + 1) respectively. However, we expect many initializations to work fine, particularly in the large-data and real-valued SSM regimes; some ablations are considered in Section 4.6. | 2312.00752#42 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
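A minimal sketch of these two initializations (following the conventions as stated above; some variants also include constant factors such as π in the imaginary part):

```python
import numpy as np

def s4d_real_init(n_state):
    # S4D-Real: the n-th element of A is -(n + 1), for n = 0, ..., N-1
    return -(np.arange(n_state) + 1.0)

def s4d_lin_init(n_state):
    # S4D-Lin (complex case): the n-th element of A is -1/2 + n*i
    return -0.5 + 1j * np.arange(n_state)

print(s4d_real_init(4))   # [-1. -2. -3. -4.]
print(s4d_lin_init(4))    # [-0.5+0.j -0.5+1.j -0.5+2.j -0.5+3.j]
```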
2312.00752 | 43 | Parameterization of Δ. We defined the selective adjustment to Δ as s_Δ(x) = Broadcast_D(Linear_1(x)), which was motivated by the mechanics of Δ (Section 3.5). We observe that it can be generalized from dimension 1 to a larger dimension R. We set this to be a small fraction of D, which uses a negligible number of parameters compared to the main Linear projections in the block. We additionally note that the broadcasting operation can instead be viewed as another Linear projection, initialized to a specific pattern of 1's and 0's; if this projection is trainable, this leads to the alternative s_Δ(x) = Linear_D(Linear_R(x)), which can be viewed as a low-rank projection. In our experiments, the | 2312.00752#43 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
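The two parameterizations can be written out directly; a small NumPy sketch (shapes only, with randomly initialized stand-in weights):

```python
import numpy as np

D, R = 16, 2                           # model dimension and low-rank dimension (R << D)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, D))           # Linear_1: R^D -> R^1
W_down = rng.normal(size=(R, D))       # Linear_R: R^D -> R^R
W_up = rng.normal(size=(D, R))         # Linear_D: R^R -> R^D

def s_delta_broadcast(x):
    # s_Delta(x) = Broadcast_D(Linear_1(x)): one scalar per token, copied to all D channels
    return np.repeat(x @ W1.T, D, axis=-1)

def s_delta_lowrank(x):
    # s_Delta(x) = Linear_D(Linear_R(x)): a trainable rank-R version of the same map
    return (x @ W_down.T) @ W_up.T

x = rng.normal(size=(5, D))            # 5 tokens
print(s_delta_broadcast(x).shape, s_delta_lowrank(x).shape)   # (5, 16) (5, 16)
```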
2312.00752 | 45 | Remark 3.1. For brevity in our experimental results, we sometimes abbreviate selective SSMs as S6 models, because they are S4 models with a selection mechanism and computed with a scan.
# 4 Empirical Evaluation
In Section 4.1 we test Mamba's ability to solve the two synthetic tasks motivated in Section 3.1. We then evaluate on three domains, each evaluated on autoregressive pretraining as well as downstream tasks.
Section 4.2: language model pretraining (scaling laws), and zero-shot downstream evaluation.
Section 4.3: DNA sequence pretraining, and fine-tuning on a long-sequence classification task.
Section 4.4: audio waveform pretraining, and the quality of autoregressively generated speech clips.
Finally, Section 4.5 shows Mamba's computational efficiency at both training and inference time, and Section 4.6 ablates various components of the architecture and selective SSMs.
# 4.1 Synthetic Tasks
Full experiment details for these tasks including task details and training protocol are in Appendix E.1.
# 4.1.1 Selective Copying | 2312.00752#45 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 46 | The Copying task is one of the most well-studied synthetic tasks for sequence modeling, originally designed to test the memorization abilities of recurrent models. As discussed in Section 3.1, LTI SSMs (linear recurrences and global convolutions) can easily solve this task by only keeping track of time instead of reasoning about the data; for example, by constructing a convolution kernel of exactly the right length (Figure 2). This was explicitly validated in earlier work on global convolutions (Romero et al. 2021). The Selective Copying task prevents this shortcut by randomizing the spacing between tokens. Note that this task has been introduced before as the Denoising task (Jing et al. 2019).
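A toy generator for this kind of data, with randomized spacing between the tokens to be copied (a sketch of the task setup, not the exact generator used in the experiments):

```python
import numpy as np

def selective_copying_batch(batch, seq_len=64, n_memorize=8, vocab=16, seed=0):
    """Place n_memorize content tokens at random positions among blank tokens;
    the target is to output the content tokens in order."""
    rng = np.random.default_rng(seed)
    blank = 0
    xs = np.full((batch, seq_len), blank, dtype=np.int64)
    ys = np.empty((batch, n_memorize), dtype=np.int64)
    for b in range(batch):
        pos = np.sort(rng.choice(seq_len, size=n_memorize, replace=False))
        tok = rng.integers(1, vocab, size=n_memorize)
        xs[b, pos] = tok
        ys[b] = tok
    return xs, ys

x, y = selective_copying_batch(2)
print(x[0])
print(y[0])
```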
Note that many previous works argue that adding architecture gating (multiplicative interactions) can endow models with "data-dependence" and solve related tasks (Dao, Fu, Saab, et al. 2023; Poli et al. 2023). However, we find this explanation insufficient intuitively because such gating does not interact along the sequence axis, and cannot affect the spacing between tokens. In particular, architecture gating is not an instance of a selection mechanism (Appendix A). | 2312.00752#46 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 47 | Table 1 confirms that gated architectures such as H3 and Mamba only partially improve performance, while the selection mechanism (modifying S4 to S6) easily solves this task, particularly when combined with these more powerful architectures.
| Model | Arch. | Layer | Acc. |
|-------|-------|-------|------|
| S4 | No gate | S4 | 18.3 |
| - | No gate | S6 | 97.0 |
| H3 | H3 | S4 | 57.0 |
| Hyena | H3 | Hyena | 30.1 |
| - | H3 | S6 | 99.7 |
| - | Mamba | S4 | 56.4 |
| - | Mamba | Hyena | 28.4 |
| Mamba | Mamba | S6 | 99.8 |
[Plot: induction heads extrapolation, accuracy vs. test sequence length, with curves for MHA-Absolute, MHA-RoPE, MHA-xPos, H3, Hyena, and a random-guessing baseline; see Table 2.]
Table 1: (Selective Copying.) Accuracy for combinations of architectures and inner sequence layers.
Table 2: (Induction Heads.) Models are trained on sequence length 2^8 = 256, and tested on increasing sequence lengths of 2^6 = 64 up to 2^20 = 1048576. Full numbers in Table 11.
# 4.1.2 Induction Heads | 2312.00752#47 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 48 | Induction heads (Olsson et al. 2022) is a simple task from the mechanistic interpretability lens (Elhage et al. 2021) that is surprisingly predictive of the in-context learning ability of LLMs. It requires models to perform associative recall and copy: for example, if the model has seen a bigram such as "Harry Potter" in the sequence, then the next time "Harry" appears in the same sequence, the model should be able to predict "Potter" by copying from history.
Dataset. We train a 2-layer model on the induction heads task at sequence length 256, with a vocab size of 16, which is comparable to prior work on this task (Dao, Fu, Saab, et al. 2023) but with longer sequences. We additionally investigate generalization and extrapolation abilities by evaluating on a range of sequence lengths from 2^6 = 64 up to 2^20 = 1048576 at test time.
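A toy example of what a single induction-heads training sequence can look like (a sketch of the associative-recall setup, not the exact generator):

```python
import numpy as np

def induction_heads_example(seq_len=256, vocab=16, seed=0):
    """Plant a (query, value) bigram in a random prefix position; when the query
    token reappears at the end, the target is to recall the associated value."""
    rng = np.random.default_rng(seed)
    seq = rng.integers(2, vocab, size=seq_len)   # random filler tokens
    query = 0                                    # token id 0 reserved as the query marker
    value = int(rng.integers(2, vocab))
    i = int(rng.integers(0, seq_len - 3))
    seq[i], seq[i + 1] = query, value            # the planted bigram
    seq[-1] = query                              # the query reappears at the end
    return seq, value                            # target: predict `value` after the final query

seq, target = induction_heads_example()
print(seq[:8], "... target:", target)
```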
Models. Following established work on induction heads, we use 2 layer models, which allows attention to mechanistically solve the induction heads task (Olsson et al. 2022). We test both multi-head attention (8 heads, with various positional encodings) and SSM variants. We use a model dimension D of 64 for Mamba and 128 for the other models. | 2312.00752#48 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 49 | Results. Table 2 shows that Mamba (or more precisely, its selective SSM layer) has the ability to solve the task perfectly because of its ability to selectively remember the relevant token while ignoring everything else in between. It generalizes perfectly to million-length sequences, or 4000× longer than it saw during training, while no other method goes beyond 2×.
Out of positional encoding variants for attention models, xPos (which was designed for length extrapolation) is slightly better than the others; also note that all attention models were only tested up to sequence length 2^14 = 16384 due to memory limitations. Out of other SSMs, H3 and Hyena are similar, contrary to the findings in Poli et al. (2023).
# 4.2 Language Modeling
We evaluate the Mamba architecture on standard autoregressive language modeling against other architectures, on both pretraining metrics (perplexity) and zero-shot evaluations. We set the model sizes (depth and width) to mirror GPT3 specifications. We use the Pile dataset (L. Gao, Biderman, et al. 2020), and follow the training recipe described in Brown et al. (2020). All training details are in Appendix E.2.
# 4.2.1 Scaling Laws | 2312.00752#49 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 50 | For baselines, we compare against the standard Transformer architecture (GPT3 architecture), as well as the strongest Transformer recipe we know of (here referred to as Transformer++), based on the PaLM and LLaMa
[Figure 4 plots: "Scaling Laws on The Pile (Sequence Length 2048)" and "Scaling Laws on The Pile (Sequence Length 8192)", perplexity vs. FLOPs (log scale), with curves for Hyena, RWKV, Transformer, RetNet, H3++, Transformer++, and Mamba.]
Figure 4: (Scaling Laws.) Models of size ≈ 125M to ≈ 1.3B parameters, trained on the Pile. Mamba scales better than all other attention-free models and is the first to match the performance of a very strong "Transformer++" recipe that has now become standard, particularly as the sequence length grows. | 2312.00752#50 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 51 | architectures (e.g. rotary embedding, SwiGLU MLP, RMSNorm instead of LayerNorm, no linear bias, and higher learning rates). We also compare against other recent subquadratic architectures (Figure 4). All model details are in Appendix E.2.
Figure 4 shows scaling laws under the standard Chinchilla (Hoffmann et al. 2022) protocol, on models from ≈ 125M to ≈ 1.3B parameters. Mamba is the first attention-free model to match the performance of a very strong Transformer recipe (Transformer++) that has now become standard, particularly as the sequence length grows. We note that full results on context length 8k are missing for the RWKV and RetNet baselines, prior strong recurrent models that can also be interpreted as SSMs, due to a lack of efficient implementation leading to out-of-memory or unrealistic computation requirements.
# 4.2.2 Downstream Evaluations | 2312.00752#51 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 52 | Table 3 shows the performance of Mamba on a range of popular downstream zero-shot evaluation tasks. We compare against the most well-known open source models at these sizes, most importantly Pythia (Biderman et al. 2023) and RWKV (B. Peng et al. 2023) which were trained with the same tokenizer, dataset, and training length (300B tokens) as our models. (Note that Mamba and Pythia are trained with context length 2048, while RWKV was trained with context length 1024.)
# 4.3 DNA Modeling
Motivated by the success of large language models, there has been recent exploration into using the foundation model paradigm for genomics. DNA has been likened to language in that it consists of sequences of discrete tokens with a finite vocab. It is also known for requiring long-range dependencies to model (Avsec et al. 2021). We investigate Mamba as a FM backbone for pretraining and fine-tuning in the same setting as recent works on long-sequence models for DNA (Nguyen, Poli, et al. 2023). In particular, we focus on two explorations of scaling laws across model size and sequence length (Figure 5), and a difficult downstream synthetic classification task requiring long context (Figure 6). | 2312.00752#52 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 53 | For pretraining, we largely follow a standard causal language modeling (next token prediction) setup for the training and model details (see also Appendix E.2). For the dataset, we largely follow the setup of HyenaDNA (Nguyen, Poli, et al. 2023), which uses the HG38 dataset for pretraining consisting of a single human genome with about 4.5 billion tokens (DNA base pairs) in the training split.
# 4.3.1 Scaling: Model Size
In this experiment, we investigate the scaling properties of genomics foundation models with various model backbones (Figure 5 Left).
Training. To advantage the baselines, we train on a short sequence length of 1024; as shown in Section 4.3.2, we expect results to favor Mamba even more at longer sequence lengths. We fix a global batch size of 1024, for a
Table 3: (Zero-shot Evaluations.) Best results for each size in bold. We compare against open source LMs with various tokenizers, trained for up to 300B tokens. Pile refers to the validation split, comparing only against models trained on the same dataset and tokenizer (GPT-NeoX-20B). For each model size, Mamba is best-in-class on every single evaluation result, and generally matches baselines at twice the model size. | 2312.00752#53 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 54 |
| Model | Token. | Pile ppl ↓ | LAMBADA ppl ↓ | LAMBADA acc ↑ | HellaSwag acc ↑ | PIQA acc ↑ | Arc-E acc ↑ | Arc-C acc ↑ | WinoGrande acc ↑ | Average acc ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| Hybrid H3-130M | GPT2 | - | 89.48 | 25.77 | 31.7 | 64.2 | 44.4 | 24.2 | 50.6 | 40.1 |
| Pythia-160M | NeoX | 29.64 | 38.10 | 33.0 | 30.2 | 61.4 | 43.2 | 24.1 | 51.9 | 40.6 |
| Mamba-130M | NeoX | 10.56 | 16.07 | 44.3 | 35.3 | 64.5 | 48.0 | 24.3 | 51.9 | 44.7 |
| Hybrid H3-360M | GPT2 | - | 12.58 | 48.0 | 41.5 | 68.1 | 51.4 | 24.7 | 54.1 | 48.0 |
| Pythia-410M | NeoX | 9.95 | 10.84 | 51.4 | 40.6 | 66.9 | 52.1 | 24.6 | 53.8 | 48.2 |
| Mamba-370M | NeoX | 8.28 | 8.14 | 55.6 | 46.5 | 69.5 | 55.1 | 28.0 | 55.3 | 50.0 |

| 2312.00752#54 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
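As a quick sanity check on these rows, the Average column is the mean of the six task accuracies (LAMBADA, HellaSwag, PIQA, Arc-E, Arc-C, WinoGrande); for example, for Mamba-130M:

```python
accs = [44.3, 35.3, 64.5, 48.0, 24.3, 51.9]
print(round(sum(accs) / len(accs), 1))   # 44.7, matching the Average column above
```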
2312.00752 | 55 |
| Model | Token. | Pile ppl ↓ | LAMBADA ppl ↓ | LAMBADA acc ↑ | HellaSwag acc ↑ | PIQA acc ↑ | Arc-E acc ↑ | Arc-C acc ↑ | WinoGrande acc ↑ | Average acc ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| Pythia-1B | NeoX | 7.82 | 7.92 | 56.1 | 47.2 | 70.7 | 57.0 | 27.1 | 53.5 | 51.9 |
| Mamba-790M | NeoX | 7.33 | 6.02 | 62.7 | 55.1 | 72.1 | 61.2 | 29.5 | 56.1 | 57.1 |
| GPT-Neo 1.3B | GPT2 | - | 7.50 | 57.2 | 48.9 | 71.1 | 56.2 | 25.9 | 54.9 | 52.4 |
| Hybrid H3-1.3B | GPT2 | - | 11.25 | 49.6 | 52.6 | 71.3 | 59.2 | 28.1 | 56.9 | 53.0 |
| OPT-1.3B | OPT | - | 6.64 | 58.0 | 53.7 | 72.4 | 56.7 | 29.6 | 59.5 | 55.0 |
| Pythia-1.4B | NeoX | 7.51 | 6.08 | 61.7 | 52.1 | 71.0 | 60.5 | 28.5 | 57.2 | 55.2 |
| RWKV-1.5B | NeoX | 7.70 | 7.04 | 56.4 | 52.5 | 72.4 | 60.5 | 29.4 | 54.6 | 54.3 |
| Mamba-1.4B | NeoX | 6.80 | 5.04 | 64.9 | 59.1 | 74.2 | 65.5 | 32.8 | 61.5 | 59.7 |

| 2312.00752#55 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 56 |
| Model | Token. | Pile ppl ↓ | LAMBADA ppl ↓ | LAMBADA acc ↑ | HellaSwag acc ↑ | PIQA acc ↑ | Arc-E acc ↑ | Arc-C acc ↑ | WinoGrande acc ↑ | Average acc ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-Neo 2.7B | GPT2 | - | 5.63 | 62.2 | 55.8 | 72.1 | 61.1 | 30.2 | 57.6 | 56.5 |
| Hybrid H3-2.7B | GPT2 | - | 7.92 | 55.7 | 59.7 | 73.3 | 65.6 | 32.3 | 61.4 | 58.0 |
| OPT-2.7B | OPT | - | 5.12 | 63.6 | 60.6 | 74.8 | 60.8 | 31.3 | 61.0 | 58.7 |
| Pythia-2.8B | NeoX | 6.73 | 5.04 | 64.7 | 59.3 | 74.0 | 64.1 | 32.9 | 59.7 | 59.1 |
| RWKV-3B | NeoX | 7.00 | 5.24 | 63.9 | 59.6 | 73.7 | 67.8 | 33.1 | 59.6 | 59.6 |
| Mamba-2.8B | NeoX | 6.22 | 4.23 | 69.2 | 66.1 | 75.2 | 69.7 | 36.3 | 63.5 | 63.3 |
| GPT-J-6B | GPT2 | - | 4.10 | | | | | | | |
| OPT-6.7B | OPT | - | 4.25 | | | | | | | |
| Pythia-6.9B | NeoX | 6.51 | 4.45 | | | | | | | |
| RWKV-7.4B | NeoX | 6.31 | | | | | | | | |

| 2312.00752#56 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
2312.00752 | 58 | total of 2^20 ≈ 1M tokens per batch. Models were trained for 10K gradient steps for a total of 10B tokens.
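For reference, the token-count arithmetic implied here (with the global batch size of 1024 and sequence length 1024 from the previous section) checks out as follows:

```python
batch_size, seq_len, steps = 1024, 1024, 10_000
tokens_per_batch = batch_size * seq_len        # 1,048,576 = 2**20, i.e. about 1M tokens
total_tokens = tokens_per_batch * steps        # about 10.5B, the quoted ~10B tokens
print(tokens_per_batch == 2**20, f"{total_tokens / 1e9:.1f}B")
```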
Results. Figure 5 (Left) shows that Mamba's pretraining perplexity improves smoothly with model size, and that Mamba scales better than both HyenaDNA and Transformer++. For example, at the largest model size of ≈ 40M parameters, the curve shows that Mamba can match the Transformer++ and HyenaDNA models with roughly 3× to 4× fewer parameters.
# 4.3.2 Scaling: Context Length
In the next DNA experiment, we investigate the scaling properties of models with respect to sequence length. We only compare the HyenaDNA and Mamba models, as quadratic attention becomes prohibitively expensive at longer sequence lengths. We pretrain models on sequence lengths 2^10 = 1024, 2^12 = 4096, 2^14 = 16384, 2^16 = 65536, 2^18 = 262144, 2^20 = 1048576. We fix a model size of 6 layers by width 128 (about 1.3M-1.4M parameters). Models were trained for 20K gradient steps for a total of ≈ 330B tokens. The longer sequence lengths used sequence length warmup similar to (Nguyen, Poli, et al. 2023). | 2312.00752#58 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces |
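Sequence length warmup can be implemented in several ways; one simple stagewise-doubling schedule is sketched below (an assumption for illustration only, not necessarily the schedule used here or in HyenaDNA):

```python
def seq_len_at_step(step, total_steps=20_000, start_len=2**10, final_len=2**20):
    """Double the training sequence length at evenly spaced stages of training
    until it reaches final_len (a simple stand-in for sequence length warmup)."""
    n_stages = (final_len // start_len).bit_length()       # doublings from start to final, plus one
    stage = min(step * n_stages // total_steps, n_stages - 1)
    return min(start_len << stage, final_len)

for s in (0, 5_000, 10_000, 19_999):
    print(s, seq_len_at_step(s))    # 1024, 4096, 32768, 1048576
```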
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 59 | Results. Figure 5 (Right) shows that Mamba is able to make use of longer context even up to extremely long sequences of length 1M, and its pretraining perplexity improves as the context increases. On the other hand, the HyenaDNA model gets worse with sequence length. This is intuitive from the discussion in Section 3.5 on properties of the selection mechanism. In particular, LTI models cannot selectively ignore information; from a convolutional perspective, a very long convolution kernel is aggregating all information across a long sequence
[Figure 5 plots. (Left) "Scaling Laws on the Human Genome (HG38)": perplexity vs. parameters (log scale) for Mamba, HyenaDNA, and Transformer++. (Right) "Scaling Laws - Sequence Length (HG38)": perplexity vs. sequence length for HyenaDNA 1.4M, Mamba 1.4M, and Mamba 7M.] | 2312.00752#59 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 60 | Figure 5: (DNA Scaling Laws.) Pretraining on the HG38 (human genome) dataset. (Left) Fixing short context length 2^10 = 1024 and increasing size from ≈ 200K to ≈ 40M parameters, Mamba scales better than baselines. (Right) Fixing model size and increasing sequence lengths while keeping tokens/batch and total training tokens fixed. Unlike baselines, the selection mechanism of Mamba facilitates better performance with increasing context length.
[Figure 6 plot. "Finetuning Accuracy (Species DNA Classification)": accuracy vs. sequence length for HyenaDNA 1.4M, Mamba 1.4M, Mamba 7M, and a random-guessing baseline.]
[Figure 7 plot. "Scaling Laws - Sequence Length (YouTubeMix)": bits per byte vs. sequence length for Mamba and the SaShiMi (S4+MLP) baseline.]
Figure 6: (Great Apes DNA Classification.) Accuracy after fine-tuning on sequences of length 2^10 = 1024 up to 2^20 = 1048576 using pretrained models of the same context length. Numerical results in Table 13. | 2312.00752#60 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 61 | Figure 7: (Audio Pretraining.) Mamba improves performance over prior state-of-the-art (SaShiMi) in autoregressive audio modeling, while improving up to minute-long context or million-length sequences (controlling for computation).
which may be very noisy. Note that while HyenaDNA claims to improve with longer context, their results do not control for computation time.
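To make this intuition concrete, the toy sketch below (ours, not the paper's code) contrasts a fixed LTI recurrence, which folds every token into its state with the same weights, with a selective recurrence whose per-token gate can be driven to zero on irrelevant inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
noise_mask = rng.random(T) < 0.5                       # half the tokens are pure noise
signal = np.zeros(T)
signal[::100] = 1.0                                    # sparse relevant tokens
x = np.where(noise_mask, rng.normal(0.0, 1.0, T), signal)

# LTI recurrence: h_t = a*h_{t-1} + b*x_t with fixed a, b (a fixed long convolution).
a, b, h_lti = 0.99, 0.01, 0.0
for t in range(T):
    h_lti = a * h_lti + b * x[t]                       # noise enters the state regardless

# Selective recurrence: the gate depends on the input (standing in for a learned,
# input-dependent step size); here it shuts off updates on noisy tokens.
h_sel = 0.0
for t in range(T):
    g = 0.0 if noise_mask[t] else 0.5
    h_sel = (1.0 - g) * h_sel + g * x[t]

print(f"LTI state (noise leaks in):      {h_lti:+.3f}")
print(f"Selective state (noise ignored): {h_sel:+.3f}")
```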
# 4.3.3 Synthetic Species Classification
We evaluate models on a downstream task of classifying between 5 different species by randomly sampling a contiguous segment of their DNA. This task is adapted from HyenaDNA, which used the species {human, lemur, mouse, pig, hippo}. We modify the task to be significantly more challenging by classifying between the five great apes species {human, chimpanzee, gorilla, orangutan, bonobo}, which are known to share 99% of their DNA.
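A minimal sketch of how one such training example can be drawn (our illustration; the toy genomes and sampling details below are assumptions, not the paper's data pipeline):

```python
import numpy as np

SPECIES = ["human", "chimpanzee", "gorilla", "orangutan", "bonobo"]

def sample_example(genomes, segment_len, rng):
    """Pick a species uniformly, then a random contiguous DNA segment of fixed length."""
    label = int(rng.integers(len(SPECIES)))
    genome = genomes[SPECIES[label]]                   # one long A/C/G/T string per species
    start = int(rng.integers(0, len(genome) - segment_len))
    return genome[start:start + segment_len], label

rng = np.random.default_rng(0)
# Toy stand-in genomes; in practice these are the real assemblies, and segment_len
# ranges from 2^10 up to 2^20 as in Figure 6.
genomes = {s: "".join(rng.choice(list("ACGT"), size=50_000)) for s in SPECIES}
segment, label = sample_example(genomes, segment_len=1024, rng=rng)
print(SPECIES[label], segment[:40], "...")
```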
# 4.4 Audio Modeling and Generation
For the audio waveform modality, we compare primarily to the SaShiMi architecture and training protocols (Goel et al. 2022). This model comprises
1. a U-Net backbone with two stages of pooling by a factor p that doubles the model dimension D per stage,
2. alternating S4 and MLP blocks in each stage. | 2312.00752#61 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 62 | 2. alternating S4 and MLP blocks in each stage.
We consider replacing the S4+MLP blocks with Mamba blocks. Experiment details are in Appendix E.4.
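A schematic sketch of this backbone under our own simplifying assumptions (skip connections, normalization, and the real S4/Mamba layers are omitted; the placeholder block only marks where they go): the length is pooled twice by a factor p while the model dimension D doubles at each stage, and the per-stage blocks are the part being swapped between S4+MLP and Mamba.

```python
import torch
import torch.nn as nn

class PlaceholderBlock(nn.Module):
    """Stand-in for an S4+MLP or Mamba block: any sequence-to-sequence map on (B, L, D)."""
    def __init__(self, d_model):
        super().__init__()
        self.mixer = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, d_model), nn.GELU())
    def forward(self, x):
        return x + self.mixer(x)

class Pool(nn.Module):
    """Downsample the sequence length by p and double the model dimension (D -> 2D)."""
    def __init__(self, d_model, p):
        super().__init__()
        self.p = p
        self.proj = nn.Linear(d_model * p, 2 * d_model)
    def forward(self, x):                              # x: (B, L, D)
        B, L, D = x.shape
        return self.proj(x.reshape(B, L // self.p, D * self.p))

class UNetBackboneSketch(nn.Module):
    """Outer blocks at full resolution, two pooling stages, center blocks at the coarsest scale."""
    def __init__(self, d_model=64, p=4, n_blocks=4, block_cls=PlaceholderBlock):
        super().__init__()
        self.outer1 = nn.Sequential(*[block_cls(d_model) for _ in range(n_blocks)])
        self.pool1 = Pool(d_model, p)
        self.outer2 = nn.Sequential(*[block_cls(2 * d_model) for _ in range(n_blocks)])
        self.pool2 = Pool(2 * d_model, p)
        self.center = nn.Sequential(*[block_cls(4 * d_model) for _ in range(n_blocks)])
    def forward(self, x):
        x = self.pool1(self.outer1(x))
        x = self.pool2(self.outer2(x))
        return self.center(x)

x = torch.randn(2, 16000, 64)                          # (batch, length 16000, D=64)
print(UNetBackboneSketch()(x).shape)                   # -> torch.Size([2, 1000, 256])
```

With p = 4 this matches the 16000 -> 4000 -> 1000 sequence lengths described for the SC09 backbone in Table 5.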
# 4.4.1 Long-Context Autoregressive Pretraining
We evaluate pretraining quality (autoregressive next-sample prediction) on YouTubeMix (DeepSound 2017), a standard piano music dataset used by prior work consisting of 4 hours of solo piano music, sampled at a rate of
16000 Hz. Pretraining details largely follow the standard language modeling setup (Section 4.2). Figure 7 evaluates the effect of increasing training sequence lengths from 2^13 = 8192 to 2^20 ≈ 10^6, while keeping computation fixed. (There are some slight edge cases to the way the data is curated, which may lead to kinks in the scaling curves. For example, only minute-long clips were available so the maximum sequence length is actually bounded by 60 s ×
16000 Hz = 960000.)
Both Mamba and the SaShiMi (S4+MLP) baseline improve consistently with longer context lengths; Mamba is better throughout, and the gap widens at longer lengths. The main metric is bits per byte (BPB), which is a constant factor log(2) of the standard negative log-likelihood (NLL) loss for pretraining other modalities. | 2312.00752#62 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 63 | We note one important detail: this is the only experiment in this paper in which we switched from the real parameterization to complex (Section 3.6). We show additional ablations in Appendix E.4.
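For concreteness, a minimal sketch of the two diagonal parameterizations being switched between (our own notation and scaling; not the released code):

```python
import torch

N = 16  # state size per channel

# Real parameterization: A_n is a negative real number, stored via its log-magnitude.
log_A = torch.log(torch.arange(1, N + 1, dtype=torch.float32))
A_real = -torch.exp(log_A)                             # A_n = -(n + 1) style init, always < 0

# Complex parameterization: A_n = -1/2 + n*i (S4D-Lin-style), giving oscillatory modes
# that suit perceptual signals such as audio waveforms.
n = torch.arange(N, dtype=torch.float32)
A_complex = torch.complex(torch.full((N,), -0.5), n)

print(A_real[:4])      # tensor([-1., -2., -3., -4.])
print(A_complex[:4])   # tensor([-0.5+0.j, -0.5+1.j, -0.5+2.j, -0.5+3.j])
```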
# 4.4.2 Autoregressive Speech Generation
SC09 is a benchmark speech generation dataset (Donahue, McAuley, and Puckette 2019; Warden 2018), consisting of 1-second clips sampled at 16000 Hz of the digits "zero" through "nine" with highly variable characteristics.
Table 4 shows automated metrics of the Mamba-UNet model compared to a variety of baselines from Goel et al. (2022): WaveNet (Oord et al. 2016), SampleRNN (Mehri et al. 2017), WaveGAN (Donahue, McAuley, and Puckette 2019), DiffWave (Z. Kong et al. 2021), and SaShiMi. A small Mamba model outperforms the state-of-the-art (and much larger) GAN- and diffusion-based models. A larger model parameter-matched to the baselines further improves on fidelity metrics dramatically. | 2312.00752#63 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 64 | Table 5 takes the small Mamba model and investigates combinations of different architectures for the outer stages and center stage. It shows that Mamba is consistently better than S4+MLP in the outer blocks, and Mamba > S4+MLP > MHA+MLP in the center blocks.
Table 4: (SC09) Automated metrics for unconditional generation on a challenging dataset of fixed-length speech clips. (Top to Bottom) Autoregressive baselines, non-autoregressive baselines, Mamba, and dataset metrics.
Table 5: (SC09 Model Ablations) Models with 6M parameters. In SaShiMi's U-Net backbone, there are 8 center blocks operating on sequence length 1000, sandwiched on each side by 8 outer blocks on sequence length 4000, sandwiched by 8 outer blocks on sequence length 16000 (40 blocks total). The architecture of the 8 center blocks is ablated independently of the rest. Note that Transformers (MHA+MLP) were not tested in the more important outer blocks because of efficiency constraints. | 2312.00752#64 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 65 | Model         Params   NLL ↓   FID ↓   IS ↑   mIS ↑   AM ↓
SampleRNN     35.0M    2.042   8.96    1.71   3.02    1.76
WaveNet       4.2M     1.925   5.08    2.27   5.80    1.47
SaShiMi       5.8M     1.873   1.99    5.13   42.57   0.74
WaveGAN       19.1M    -       2.03    4.90   36.10   0.80
DiffWave      24.1M    -       1.92    5.26   51.21   0.68
  + SaShiMi   23.0M    -       1.42    5.94   69.17   0.59
Mamba         6.1M     1.852   0.94    6.26   88.54   0.52
Mamba         24.3M    1.860   0.67    7.33   144.9   0.36
Train         -        -       0.00    8.56   292.5   0.16
Test          -        -       0.02    8.33   257.6   0.19 | 2312.00752#65 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 67 | # 4.5 Speed and Memory Benchmarks
We benchmark the speed of the SSM scan operation (state expansion N = 16), as well as the end-to-end inference throughput of Mamba, in Figure 8. Our efficient SSM scan is faster than the best attention implementation that we know of (FlashAttention-2 (Dao 2023)) beyond sequence length 2K, and up to 20-40× faster than a standard scan implementation in PyTorch. Mamba achieves 4-5× higher inference throughput than a Transformer of similar size, since without the KV cache it can use much higher batch sizes. For example, a Mamba-6.9B (untrained) would have higher inference throughput than a 5× smaller Transformer-1.3B. Details in Appendix E.5, which additionally includes a benchmark of memory consumption.
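To illustrate what the scan computes (a minimal NumPy sketch of the recurrence, not the fused hardware-aware kernel): the per-token recurrence h_t = a_t*h_{t-1} + b_t can be evaluated sequentially, or in parallel with a prefix scan over the associative composition of affine maps, which is what makes a work-efficient GPU implementation possible.

```python
import numpy as np

def sequential_scan(a, b):
    """Reference: h_t = a_t * h_{t-1} + b_t, computed one step at a time."""
    h, out = 0.0, np.empty_like(b)
    for t in range(len(b)):
        h = a[t] * h + b[t]
        out[t] = h
    return out

def parallel_scan(a, b):
    """Hillis-Steele inclusive scan over the associative combine
    (a1, b1) followed by (a2, b2)  ->  (a1*a2, a2*b1 + b2)."""
    A, B = a.copy(), b.copy()
    d = 1
    while d < len(B):
        A_prev, B_prev = A[:-d], B[:-d]                # values d positions earlier
        A[d:], B[d:] = A[d:] * A_prev, A[d:] * B_prev + B[d:]
        d *= 2
    return B

rng = np.random.default_rng(0)
L = 1024
a = rng.uniform(0.5, 1.0, L)   # decay terms (playing the role of the discretized A)
b = rng.normal(size=L)         # input terms (playing the role of dt * B * x_t)
assert np.allclose(sequential_scan(a, b), parallel_scan(a, b))
print("sequential and parallel scans agree")
```

The kernel in the paper additionally avoids materializing the expanded state in slow GPU memory, which is where much of the measured speedup comes from.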
| 2312.00752#67 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 68 | [Figure 8 plots. (Left) "Scan vs Convolution vs Attention time (A100 80GB PCIe)": runtime vs. sequence length (512 to 512k) for FlashAttention-2, convolution, scan (PyTorch), and scan (ours). (Right) "Inference throughput on A100 80GB (prompt length 2048)": throughput vs. batch size (1 to 128) for Mamba 6.9B and Transformer 6.7B.]
Figure 8: (Efficiency Benchmarks.) (Left) Training: our efficient scan is 40× faster than a standard implementation. (Right) Inference: as a recurrent model, Mamba can achieve 5× higher throughput than Transformers.
# 4.6 Model Ablations
We perform a series of detailed ablations on components of our model, focusing on the setting of language modeling with size ≈ 350M models at Chinchilla token counts (same setting as Figure 4).
# 4.6.1 Architecture
Table 6 investigates the effects of the architecture (block) and its inner SSM layer (Figure 3). We find that | 2312.00752#68 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 69 | # 4.6.1 Architecture
Table 6 investigates the effects of the architecture (block) and its inner SSM layer (Figure 3). We find that
• Among previous non-selective (LTI) SSMs, which are equivalent to global convolutions, performance is very similar.
• Replacing the complex-valued S4 variant from previous work with a real-valued one does not affect performance much, suggesting that (at least for LM) real-valued SSMs may be a better choice when accounting for hardware efficiency.
• Replacing any of these with a selective SSM (S6) significantly improves performance, validating the motivation of Section 3.
• The Mamba architecture performs similarly to the H3 architecture (and seems slightly better when using a selective layer).
We also investigate interleaving the Mamba block with other blocks such as MLP (a traditional architecture) and MHA (a hybrid attention architecture) in Appendix E.2.2.
# 4.6.2 Selective SSM
Table 7 ablates the selective SSM layer by considering different combinations of selective Δ, B, and C parameters (Algorithm 2), showing that Δ is the most important parameter due to its connection to RNN gating (Theorem 1). | 2312.00752#69 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 70 | Table 8 considers different initializations of the SSM, which have been shown to make a large difference in some data modalities and settings (Gu, Goel, and Ré 2022; Gu, Gupta, et al. 2022). On language modeling, we find that simpler real-valued diagonal initializations (S4D-Real, row 3) instead of more standard complex-valued parameterizations (S4D-Lin, row 1) perform better. Random initializations also work well, consistent with findings from prior work (Mehta et al. 2023).
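For reference, minimal constructions of the initializations compared in Table 8 (our sketch; the shapes and the sign convention for the random case are assumptions, not the released code):

```python
import torch

N = 16
n = torch.arange(N, dtype=torch.float32)

A_s4d_lin  = torch.complex(torch.full((N,), -0.5), n)  # S4D-Lin:  A_n = -1/2 + n*i  (complex)
A_half     = torch.full((N,), -0.5)                     #           A_n = -1/2        (real)
A_s4d_real = -(n + 1)                                   # S4D-Real: A_n = -(n + 1)    (real)
A_random   = -torch.exp(torch.randn(N))                 # random magnitudes ~ exp(N(0, 1));
                                                        # kept negative here for stability (our choice)
```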
Table 9 and Table 10 consider varying the dimension of the Δ and (B, C) projections respectively. Changing them from static to selective provides the most benefit, while increasing the dimensions further generally improves performance modestly with a small increase in parameter count.
Of particular note is the dramatic improvement of the selective SSM when the state size N is increased, with over a 1.0 perplexity improvement for a cost of only 1% additional parameters. This validates our core motivation in Sections 3.1 and 3.3.
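A quick check of that cost using the raw numbers from Table 10 (below):

```python
# From Table 10: growing the state size N from 1 to 16 (with selective B and C).
params_n1, params_n16 = 367.1e6, 371.5e6
ppl_n1, ppl_n16 = 9.73, 8.71
print(f"extra parameters: +{100 * (params_n16 - params_n1) / params_n1:.1f}%")  # ~ +1.2%
print(f"perplexity improvement: {ppl_n1 - ppl_n16:.2f}")                        # 1.02 > 1.0
```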
| 2312.00752#70 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 71 |
Table 6: (Ablations: Architecture and SSM layer.) The Mamba block performs similarly to H3 while being simpler. In the inner layer, there is little difference among different parameterizations of LTI models, while selective SSMs (S6) provide a large improvement. More specifically, the S4 (real) variant is S4D-Real and the S4 (complex) variant is S4D-Lin.
Model   Arch.   SSM Layer      Perplexity
Hyena   H3      Hyena          10.24
H3      H3      S4 (complex)   10.30
-       H3      S4 (real)      10.34
-       H3      S6             8.95
-       Mamba   Hyena          10.75
-       Mamba   S4 (complex)   10.54
-       Mamba   S4 (real)      10.56
Mamba   Mamba   S6             8.69
Table 7: (Ablations: Selective parameters.) Δ is the most important parameter (Theorem 1), but using multiple selective parameters together synergizes.
Table 8: (Ablations: Parameterization of A.) The more standard initializations based on S4D-Lin (Gu, Gupta, et al. 2022) perform worse than S4D-Real or a random initialization, when the SSM is selective. | 2312.00752#71 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 72 | [Table 7 data. Columns: Selective Δ, Selective B, Selective C (✓/✗), Perplexity. The five settings have perplexities 10.93, 10.15, 9.98, 9.81, and 8.71; the individual ✓/✗ marks are not recoverable here.]
A_n Initialization      Field     Perplexity
A_n = -1/2 + ni         Complex   9.16
A_n = -1/2              Real      8.85
A_n = -(n + 1)          Real      8.71
A_n ~ exp(N(0, 1))      Real      8.71
Table 9: (Ablations: Expressivity of Δ.) The selection mechanism of Δ constructs it with a projection of the input. Projecting it even to dim. 1 provides a large increase in performance; increasing it further provides further improvements at the cost of a modest increase in parameters. State size fixed to N = 16.
Size of Δ proj.   Params (M)   Perplexity
-                 358.9        9.12
1                 359.1        8.97
2                 359.3        8.97
4                 359.7        8.91
8                 360.5        8.83
16                362.1        8.84
32                365.2        8.80
64                371.5        8.71
# Perplexity | 2312.00752#72 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 73 | # Perplexity
Table 10: (Ablations: SSM state dimension.) (Top) Constant B and C. (Bottom) Selective B and C. Increasing the SSM state dimension N, which can be viewed as an expansion factor on the dimension of the recurrent state, can significantly improve performance for a negligible cost in parameters/FLOPs, but only when B and C are also selective. Size of Δ projection fixed to 64.
Constant B and C:
State dimension N   Params (M)   Perplexity
1                   367.1        9.88
2                   367.4        9.86
4                   368.0        9.82
8                   369.1        9.82
16                  371.5        9.81
Selective B and C:
State dimension N   Params (M)   Perplexity
1                   367.1        9.73
2                   367.4        9.40
4                   368.0        9.09
8                   369.1        8.84
16                  371.5        8.71
# 5 Discussion
We discuss related work, limitations, and some future directions.
Related Work. Appendix A discusses how the selection mechanism relates to similar concepts. Appendix B has an extended related work of SSMs and other related models. | 2312.00752#73 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 74 | Related Work. Appendix A discusses how the selection mechanism relates to similar concepts. Appendix B has an extended related work of SSMs and other related models.
No Free Lunch: Continuous-Discrete Spectrum. Structured SSMs were originally defined as discretizations of continuous systems (1), and have had a strong inductive bias toward continuous-time data modalities such as perceptual signals (e.g. audio, video). As discussed in Sections 3.1 and 3.5, the selection mechanism overcomes their weaknesses on discrete modalities such as text and DNA; but this conversely can impede their performance
on data that LTI SSMs excel on. Our ablations on audio waveforms examine this tradeoff in more detail.
Downstream Affordances. Transformer-based foundation models (particularly LLMs) have a rich ecosystem of properties and modes of interaction with pretrained models, such as fine-tuning, adaptation, prompting, in-context learning, instruction tuning, RLHF, quantization, and so on. We are particularly interested in whether Transformer alternatives such as SSMs have similar properties and affordances. | 2312.00752#74 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) have been developed to address Transformers' computational
inefficiency on long sequences, but they have not performed as well as
attention on important modalities such as language. We identify that a key
weakness of such models is their inability to perform content-based reasoning,
and make several improvements. First, simply letting the SSM parameters be
functions of the input addresses their weakness with discrete modalities,
allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though
this change prevents the use of efficient convolutions, we design a
hardware-aware parallel algorithm in recurrent mode. We integrate these
selective SSMs into a simplified end-to-end neural network architecture without
attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$
higher throughput than Transformers) and linear scaling in sequence length, and
its performance improves on real data up to million-length sequences. As a
general sequence model backbone, Mamba achieves state-of-the-art performance
across several modalities such as language, audio, and genomics. On language
modeling, our Mamba-3B model outperforms Transformers of the same size and
matches Transformers twice its size, both in pretraining and downstream
evaluation. | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | [
{
"id": "2302.13971"
},
{
"id": "2105.14103"
},
{
"id": "1803.05457"
},
{
"id": "2102.02611"
},
{
"id": "1607.06450"
},
{
"id": "2212.08136"
},
{
"id": "2210.10340"
},
{
"id": "2305.14952"
},
{
"id": "2307.08621"
},
{
"id": "1710.05941"
},
{
"id": "2305.13048"
},
{
"id": "1609.03499"
},
{
"id": "1606.08415"
},
{
"id": "1611.01576"
},
{
"id": "2307.02486"
},
{
"id": "2306.09539"
},
{
"id": "1904.10509"
},
{
"id": "2304.11062"
},
{
"id": "1709.02755"
},
{
"id": "2104.09864"
},
{
"id": "2101.00027"
},
{
"id": "2002.05202"
},
{
"id": "2308.03210"
}
] |
2312.00752 | 75 | Scaling. Our empirical evaluation is limited to small model sizes, below the threshold of most strong open source LLMs (e.g. Llama (Touvron et al. 2023)) as well as other recurrent models such as RWKV (B. Peng et al. 2023) and RetNet (Y. Sun et al. 2023), which have been evaluated at the 7B parameter scale and beyond. It remains to assess whether Mamba still compares favorably at these larger sizes. We also note that scaling SSMs may involve further engineering challenges and adjustments to the model that are not discussed in this paper.
# 6 Conclusion
We introduce a selection mechanism to structured state space models, allowing them to perform context-dependent reasoning while scaling linearly in sequence length. When incorporated into a simple attention-free architecture, Mamba achieves state-of-the-art results on a diverse set of domains, where it matches or exceeds the performance of strong Transformer models. We are excited about the broad applications of selective state space models to build foundation models for different domains, especially in emerging modalities requiring long context such as genomics, audio, and video. Our results suggest that Mamba is a strong candidate to be a general sequence model backbone.
# Acknowledgments
We thank Karan Goel, Arjun Desai, and Kush Bhatia for helpful feedback on the draft.
# References
[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. "Unitary Evolution Recurrent Neural Networks". In: The International Conference on Machine Learning (ICML). 2016, pp. 1120–1128.
[2] Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R Ledsam, Agnieszka Grabska-Barwinska, Kyle R Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, and David R Kelley. "Effective Gene Expression Prediction from Sequence by Integrating Long-range Interactions". In: Nature Methods 18.10 (2021), pp. 1196–1203.
[3] Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. "Using Fast Weights to Attend to the Recent Past". In: Advances in Neural Information Processing Systems (NeurIPS) 29 (2016).
[4] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. "Layer Normalization". In: arXiv preprint arXiv:1607.06450 (2016).
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate". In: The International Conference on Learning Representations (ICLR). 2015.
[6] David Balduzzi and Muhammad Ghifary. "Strongly-typed Recurrent Neural Networks". In: International Conference on Machine Learning. PMLR. 2016, pp. 1292–1300.
[7] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. "Pythia: A Suite for Analyzing Large Language Models across Training and Scaling". In: The International Conference on Machine Learning (ICML). PMLR. 2023, pp. 2397–2430.
[8] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. "PIQA: Reasoning about Physical Commonsense in Natural Language". In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. 05. 2020, pp. 7432–7439.
[9] Guy E Blelloch. "Prefix Sums and Their Applications". In: (1990).
[10] James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. "Quasi-recurrent Neural Networks". In: arXiv preprint arXiv:1611.01576 (2016).
[11] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. "Language Models are Few-shot Learners". In: Advances in Neural Information Processing Systems (NeurIPS) 33 (2020), pp. 1877–1901.
[12] Aydar Bulatov, Yuri Kuratov, and Mikhail S Burtsev. "Scaling Transformer to 1M tokens and Beyond with RMT". In: arXiv preprint arXiv:2304.11062 (2023).
[13] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. "Generating Long Sequences with Sparse Transformers". In: arXiv preprint arXiv:1904.10509 (2019).
[14] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. "Rethinking Attention with Performers". In: The International Conference on Learning Representations (ICLR). 2021.
[15] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. "PaLM: Scaling Language Modeling with Pathways". In: Journal of Machine Learning Research 24.240 (2023), pp. 1–113. url: http://jmlr.org/papers/v24/22-1144.html.
[16] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling". In: arXiv preprint arXiv:1412.3555 (2014).
[17] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. "Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge". In: arXiv preprint arXiv:1803.05457 (2018).
[18] Tri Dao. "FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning". In: (2023).
[19] Tri Dao, Daniel Y Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness". In: Advances in Neural Information Processing Systems (NeurIPS). 2022.
[20] Tri Dao, Daniel Y Fu, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. "Hungry Hungry Hippos: Towards Language Modeling with State Space Models". In: The International Conference on Learning Representations (ICLR). 2023.
[21] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. "Language Modeling with Gated Convolutional Networks". In: The International Conference on Machine Learning (ICML). PMLR. 2017, pp. 933–941.
[22] DeepSound. SampleRNN. https://github.com/deepsound-project/samplernn-pytorch. 2017.
[23] Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, and Furu Wei. "LongNet: Scaling Transformers to 1,000,000,000 Tokens". In: arXiv preprint arXiv:2307.02486 (2023).
[24] Chris Donahue, Julian McAuley, and Miller Puckette. "Adversarial Audio Synthesis". In: The International Conference on Learning Representations (ICLR). 2019.
[26] Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. "A Mathematical Framework for Transformer Circuits". In: Transformer Circuits Thread (2021). https://transformer-circuits.pub/2021/framework/index.html.
[27] Mahan Fathi, Jonathan Pilault, Pierre-Luc Bacon, Christopher Pal, Orhan Firat, and Ross Goroshin. "Block-State Transformers". In: arXiv preprint arXiv:2306.09539 (2023).
[28] Yassir Fathullah, Chunyang Wu, Yuan Shangguan, Junteng Jia, Wenhan Xiong, Jay Mahadeokar, Chunxi Liu, Yangyang Shi, Ozlem Kalinli, Mike Seltzer, et al. "Multi-Head State Space Model for Sequence Modeling". In: INTERSPEECH. 2023.
[29] Karl J Friston, Lee Harrison, and Will Penny. "Dynamic Causal Modelling". In: Neuroimage 19.4 (2003), pp. 1273–1302.
[30] Daniel Y Fu, Elliot L Epstein, Eric Nguyen, Armin W Thomas, Michael Zhang, Tri Dao, Atri Rudra, and Christopher Ré. "Simple Hardware-efficient Long Convolutions for Sequence Modeling". In: The International Conference on Machine Learning (ICML) (2023).
[31] Ken-ichi Funahashi and Yuichi Nakamura. "Approximation of Dynamical Systems by Continuous Time Recurrent Neural Networks". In: Neural Networks 6.6 (1993), pp. 801–806.
[32] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. "The Pile: An 800GB Dataset of Diverse Text for Language Modeling". In: arXiv preprint arXiv:2101.00027 (2020).
[33] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A Framework for Few-shot Language Model Evaluation. Version v0.0.1. Sept. 2021. doi: 10.5281/zenodo.5371628. url: https://doi.org/10.5281/zenodo.5371628.
[34] Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. "It's Raw! Audio Generation with State-Space Models". In: The International Conference on Machine Learning (ICML). 2022.
[35] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. "HIPPO: Recurrent Memory with Optimal Polynomial Projections". In: Advances in Neural Information Processing Systems (NeurIPS). 2020.
[36] Albert Gu, Karan Goel, and Christopher Ré. "Efficiently Modeling Long Sequences with Structured State Spaces". In: The International Conference on Learning Representations (ICLR). 2022.
[37] Albert Gu, Caglar Gulcehre, Tom Le Paine, Matt Hoffman, and Razvan Pascanu. "Improving the Gating Mechanism of Recurrent Neural Networks". In: The International Conference on Machine Learning (ICML). 2020.
[38] Albert Gu, Ankit Gupta, Karan Goel, and Christopher Ré. "On the Parameterization and Initialization of Diagonal State Space Models". In: Advances in Neural Information Processing Systems (NeurIPS). 2022.
[39] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. "Combining Recurrent, Convolutional, and Continuous-time Models with the Linear State Space Layer". In: Advances in Neural Information Processing Systems (NeurIPS). 2021.
[40] Albert Gu, Isys Johnson, Aman Timalsina, Atri Rudra, and Christopher Ré. "How to Train Your HIPPO: State Space Models with Generalized Basis Projections". In: The International Conference on Learning Representations (ICLR). 2023.
[41] Ankit Gupta, Albert Gu, and Jonathan Berant. "Diagonal State Spaces are as Effective as Structured State Spaces". In: Advances in Neural Information Processing Systems 35 (2022), pp. 22982–22994.
[42] David Ha, Andrew Dai, and Quoc V. Le. "HyperNetworks". In: The International Conference on Learning Representations (ICLR). 2017.
[43] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. "Dream to Control: Learning Behaviors by Latent Imagination". In: The International Conference on Learning Representations (ICLR). 2020.
[44] Ramin Hasani, Mathias Lechner, Tsun-Hsuan Wang, Makram Chahine, Alexander Amini, and Daniela Rus. "Liquid Structural State-Space Models". In: The International Conference on Learning Representations (ICLR). 2023.
[45] Mikael Henaff, Arthur Szlam, and Yann LeCun. "Recurrent Orthogonal Networks and Long-Memory Tasks". In: The International Conference on Machine Learning (ICML). 2016.
[46] Dan Hendrycks and Kevin Gimpel. "Gaussian Error Linear Units (GELUs)". In: arXiv preprint arXiv:1606.08415 (2016).
[47] Sepp Hochreiter and Jürgen Schmidhuber. "Long Short-Term Memory". In: Neural Computation 9.8 (1997), pp. 1735–1780.
[48] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. "An Empirical Analysis of Compute-Optimal Large Language Model Training". In: Advances in Neural Information Processing Systems (NeurIPS) 35 (2022), pp. 30016–30030.
[49] Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le. "Transformer Quality in Linear Time". In: The International Conference on Machine Learning (ICML). PMLR. 2022, pp. 9099–9117.
[50] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. "Deep Learning for Time Series Classification: A Review". In: Data Mining and Knowledge Discovery 33.4 (2019), pp. 917–963.