Update README.md
README.md CHANGED
@@ -39,6 +39,8 @@ As part of our commitment to open science, we release **weights of 15 intermedia
 
 StripedHyena is a deep signal processing, hybrid architecture composed of multi-head attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, improving over decoder-only Transformers.
 
+StripedHyena is designed to leverage the specialization of each of its layer classes, with Hyena layers implementing the bulk of the computation required for sequence processing and attention layers supplementing the ability to perform targeted pattern recall.
+
 Some highlights of the architecture:
 - **Efficient autoregressive generation** via a recurrent mode (>500k generation with a single 80GB GPU)
 - **Significantly faster training and finetuning** at long context (>3x at 131k)
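The added paragraph above describes the division of labor between layer types. As a rough mental model (and only that), the hybrid stack can be pictured as mostly gated long-convolution blocks with attention blocks interleaved. In the sketch below, the module names, shapes, interleaving ratio, and the omission of residual connections and normalization are all simplifying assumptions, not the actual StripedHyena implementation:

```python
# Schematic sketch of a "striped" stack of gated-convolution (Hyena-style) and
# attention blocks. Names, shapes and the interleaving ratio are illustrative.
import torch
import torch.nn as nn


class GatedConvBlock(nn.Module):
    """Simplified gated long convolution: a filter applied via FFT, then gated."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)  # value and gate
        self.filter = nn.Parameter(0.02 * torch.randn(d_model, max_len))
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        v, gate = self.in_proj(x).chunk(2, dim=-1)
        L = x.shape[1]
        h = self.filter[:, :L]
        # Causal long convolution via FFT (zero-padding to 2L avoids wraparound).
        v_f = torch.fft.rfft(v.transpose(1, 2), n=2 * L)
        h_f = torch.fft.rfft(h, n=2 * L)
        y = torch.fft.irfft(v_f * h_f, n=2 * L)[..., :L].transpose(1, 2)
        return self.out_proj(torch.sigmoid(gate) * y)


class AttentionBlock(nn.Module):
    """Standard causal multi-head self-attention."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        L = x.shape[1]
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), 1)
        out, _ = self.attn(x, x, x, attn_mask=mask, need_weights=False)
        return out


def build_striped_stack(d_model=512, max_len=2048, n_layers=8, attn_every=4):
    """Mostly gated-convolution blocks, with attention interleaved for recall."""
    layers = [
        AttentionBlock(d_model) if (i + 1) % attn_every == 0
        else GatedConvBlock(d_model, max_len)
        for i in range(n_layers)
    ]
    return nn.Sequential(*layers)
```

The point of the sketch is only the structure: the convolution blocks mix information across the whole sequence cheaply, while the occasional attention block can attend to specific earlier tokens.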
@@ -54,10 +56,10 @@ Some highlights of the architecture:
 One of the advantages of deep signal processing models is their flexibility. Different parametrizations of convolutions can be used depending on the memory, expressivity and causality requirements of pretraining, finetuning or inference workloads.
 
 The main classes are:
-- Modal: unconstrained poles ([reference](https://arxiv.org/pdf/2203.14343.pdf), [reference](https://arxiv.org/abs/2310.18780)), or constrained poles ([reference](https://arxiv.org/abs/2206.11893), [reference](https://arxiv.org/pdf/2303.06349.pdf))
-- Canonical / Rational: TBA
+- Modal: unconstrained poles ([reference](https://arxiv.org/pdf/2203.14343.pdf), [reference](https://arxiv.org/abs/2310.18780)), or constrained poles ([reference](https://arxiv.org/abs/2206.11893), [reference](https://arxiv.org/pdf/2303.06349.pdf)).
+- Canonical / Rational: TBA.
 - Hypernetworks: hypernetwork ([reference](https://arxiv.org/abs/2102.02611)), modulated hypernetwork ([reference](https://arxiv.org/abs/2302.10866)).
-- Explicit: modulated explicit ([reference](https://arxiv.org/pdf/2210.09298.pdf))
+- Explicit: modulated explicit ([reference](https://arxiv.org/pdf/2210.09298.pdf)).
 
 StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training.
 
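For the modal class listed in the hunk above, a filter is parametrized by poles and residues, and the same parameters support two evaluation modes: materialize the explicit filter for FFT-based convolutional training, or run the equivalent linear recurrence for constant-memory autoregressive generation, which is the mechanism behind the recurrent generation mode highlighted earlier. A minimal sketch under assumed names and shapes (`poles`, `residues`, `state_dim` are illustrative, not the repository's actual parameter layout):

```python
# Minimal sketch of a modal filter: h[t] = sum_i residues[i] * poles[i] ** t.
# Names and shapes are illustrative, not the StripedHyena parametrization.
import torch


def modal_filter(poles, residues, seq_len):
    """Materialize the explicit filter (convolutional mode, used with FFT conv)."""
    # poles, residues: complex tensors of shape (state_dim,)
    t = torch.arange(seq_len, device=poles.device, dtype=torch.float32)
    h = (residues[:, None] * poles[:, None] ** t[None, :]).sum(dim=0)
    return h.real  # (seq_len,)


def recurrent_step(state, x_t, poles, residues):
    """One step of the equivalent recurrence (generation mode, O(state_dim) memory).

    state: complex tensor of shape (state_dim,), initialized to zeros.
    Returns (new_state, y_t) with y_t = Re(sum_i residues[i] * state[i]).
    """
    state = poles * state + x_t
    y_t = (residues * state).sum().real
    return state, y_t


# Constrained-pole variants keep |poles| <= 1 (e.g. via an exponential
# parametrization) so the filter decays and the recurrence stays stable.
```

The two modes compute the same output: unrolling the recurrence gives y[t] = sum_{s <= t} h[t - s] * x[s] with h[tau] = sum_i residues[i] * poles[i] ** tau, so training can use the materialized filter over the whole sequence while generation advances one token at a time without re-reading the prompt.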
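On the mixed-precision note: because the modal filter involves repeated powers of the poles, rounding error in half precision compounds over long prompts, which is why these tensors are singled out. One way to honor the note, sketched below under the assumption that the relevant parameters have "poles" / "residues" in their names (check the actual state_dict keys of the checkpoint you load), is to cast the model to `bfloat16` and then restore those parameters to `float32`:

```python
# Sketch: run the model in bfloat16 while keeping pole/residue tensors in float32.
# The name matching below is an assumption about how the parameters are named.
import torch


def to_mixed_precision(model: torch.nn.Module) -> torch.nn.Module:
    model = model.to(torch.bfloat16)
    for name, param in model.named_parameters():
        if "poles" in name or "residues" in name:
            param.data = param.data.to(torch.float32)
    return model
```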