---
license: apache-2.0
library_name: transformers
---

# recursal/QRWKV6-32B-Instruct-Preview-v0.1
  
> Compute sponsored by [TensorWave](https://tensorwave.com) - Access MI300X today!

Try out this model on:  
[![Featherless](https://img.shields.io/badge/recursal%2FQRWKV6--32B--Instruct--Preview--v0.1-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/recursal/QRWKV6-32B-Instruct-Preview-v0.1)

![purple-finch.png](./purple-finch.png)

QRWKV6 32B Instruct Preview is one of the largest and strongest RWKV models to date.

More details can be found at:
https://substack.recursal.ai/p/q-rwkv-6-32b-instruct-preview

## Evaluation

The following results demonstrate that QRWKV6 performs similarly to, or ahead of, its base model: *Qwen2.5-32B-Instruct*

| Model | MMLU | ARC-Challenge | ARC-Easy | HellaSwag | LAMBADA (OpenAI) | PIQA | SciQ | Winogrande |
|---|---|---|---|---|---|---|---|---|
| QRWKV6-32B-Instruct | 76.63% | 60.92% | 83.00% | 83.03% | 74.17% | 82.92% | 95.40% | 78.22% |
| Qwen2.5-32B-Instruct | 81.77% | 58.70% | 77.06% | 85.19% | 75.22% | 81.23% | 95.00% | 72.85% |
| RWKV-EagleX-7B-v2 | 43.84% | 41.55% | 74.45% | 56.00% | 75.02% | 77.64% | 93.00% | 73.32% |
| Falcon-Mamba-7B | 59.72% | 59.13% | 81.73% | 80.21% | 68.89% | 82.15% | 93.60% | 74.35% |
| Llama-3.1-8B-Instruct | 68.11% | 55.29% | 79.71% | 79.27% | 73.12% | 80.85% | 96.20% | 74.19% |
| Llama-3.1-70B-Instruct | 82.40% | 63.40% | 83.50% | 84.62% | 75.68% | 83.79% | 97.10% | 79.08% |
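
The benchmarks above correspond to tasks in EleutherAI's lm-evaluation-harness. The exact evaluation settings (harness version, few-shot counts, prompt format) are not stated here, so the following reproduction sketch is an assumption rather than the official recipe:

```python
# Hypothetical reproduction sketch using EleutherAI's lm-evaluation-harness (v0.4+).
# Task list, few-shot settings, and model arguments are assumptions; the official
# evaluation configuration may differ.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=recursal/QRWKV6-32B-Instruct-Preview-v0.1,"
        "trust_remote_code=True,dtype=bfloat16"
    ),
    tasks=[
        "mmlu", "arc_challenge", "arc_easy", "hellaswag",
        "lambada_openai", "piqa", "sciq", "winogrande",
    ],
    batch_size=8,
)

# Print the per-task metric dictionaries reported by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```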

## Model Notes

Linear models offer a promising way to significantly reduce computational costs at scale, particularly for large context lengths. This allows a more than 1000x improvement in inference cost efficiency, enabling both O1-style inference-time reasoning and wider AI accessibility.
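
As a purely illustrative back-of-the-envelope comparison (the 1000x figure above is the authors' claim, not derived from this snippet): per generated token, softmax attention must read a KV cache that grows with the context length, while an RWKV-style recurrent state stays a fixed size:

```python
# Illustrative scaling comparison only; not a benchmark of this model.
# Per generated token: softmax attention reads a KV cache of length T,
# while an RWKV-style recurrence reads a constant-size state.
for context_len in (1_024, 16_384, 131_072, 1_048_576):
    attention_reads = context_len   # grows linearly with context per token
    rwkv_state_reads = 1            # fixed-size recurrent state
    ratio = attention_reads / rwkv_state_reads
    print(f"context={context_len:>9,} tokens  per-token read ratio ≈ {ratio:,.0f}x")
```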

We are able to convert any previously trained QKV Attention-based model, such as Qwen and LLaMA, into an RWKV variant without **requiring retraining from scratch**. This lets us rapidly test and validate the significantly more efficient RWKV linear attention mechanism at a larger scale with a much smaller budget, bypassing the need to train from scratch.
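
As a rough illustration of the idea (the actual conversion code is not described here and has not been released), one can picture swapping each pretrained attention block for a linear-attention block while keeping the surrounding weights. The `ToyLinearAttention` class and the `self_attn` module name below are hypothetical placeholders based on typical Hugging Face Qwen/LLaMA layer layouts:

```python
# Hypothetical, highly simplified sketch of the attention-swap idea.
# It is NOT the actual conversion code; it only illustrates replacing attention
# modules while keeping the rest of the pretrained network intact.
import torch
import torch.nn as nn


class ToyLinearAttention(nn.Module):
    """Stand-in for an RWKV-style time-mix block (placeholder, not real RWKV-6)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.receptance = nn.Linear(hidden_size, hidden_size, bias=False)
        self.key = nn.Linear(hidden_size, hidden_size, bias=False)
        self.value = nn.Linear(hidden_size, hidden_size, bias=False)
        self.output = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Toy linear attention: running sum of k*v gated by a receptance signal.
        r = torch.sigmoid(self.receptance(x))
        kv = self.key(x) * self.value(x)
        state = torch.cumsum(kv, dim=1)   # (batch, time, hidden)
        return self.output(r * state)


def swap_attention_blocks(module: nn.Module, hidden_size: int) -> nn.Module:
    """Recursively replace modules named 'self_attn' (the usual name in Hugging
    Face Qwen/LLaMA decoder layers) with the linear-attention stand-in;
    embeddings, feed-forward layers, and norms keep their pretrained weights."""
    for name, child in module.named_children():
        if name == "self_attn":
            setattr(module, name, ToyLinearAttention(hidden_size))
        else:
            swap_attention_blocks(child, hidden_size)
    return module
```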

This approach demonstrates the soundness and scalability of the RWKV architecture design, reinforcing the idea that QKV attention is not the sole essential component.

One downside of this technique is that the model's inherent knowledge and training data are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on more than 100 languages, the QRWKV model is limited to the approximately 30 languages supported by the Qwen line of models.

Because this model lacks the RWKV-based channel-mix and feed-forward layers, separate inference code is needed for it.
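
A minimal loading sketch, assuming the repository ships that custom code via the standard Hugging Face `trust_remote_code` mechanism and that a chat template is inherited from the Qwen parent (both unverified assumptions; defer to the repository's own instructions):

```python
# Minimal loading sketch; trust_remote_code, dtype, and chat-template usage are
# assumptions, not confirmed instructions from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "recursal/QRWKV6-32B-Instruct-Preview-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,   # assumed: custom RWKV inference code ships in the repo
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain linear attention in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```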

Furthermore, due to compute constraints, we were only able to train with a context length of up to 16K tokens. While the model is stable beyond this limit, additional training might be required to support longer context lengths.

## Future Models

We are currently training Q-RWKV-6 72B. Additionally, once RWKV-7 is finalized, we plan to apply the same process and provide a full line-up of:

- Q-RWKV-7 32B
- LLaMA-RWKV-7 70B

Lastly, we intend to provide details on the conversion along with our paper after the model release. 

## Links
- [Our wiki](https://wiki.rwkv.com)
- [TensorWave - The AMD Cloud](https://tensorwave.com) - Access MI300X today!
- [Recursal.AI Cloud Platform](https://platform.recursal.ai)
- [Featherless Inference](https://featherless.ai/model-families/rwkv6/)

## Acknowledgement
We are grateful for the help and support from the following key groups:

- [TensorWave](https://tensorwave.com) - Sponsored the compute needed to train this model. It wouldn't be possible without them.
- EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper.
- Linux Foundation AI & Data group for supporting and hosting the RWKV project.
- Recursal for their support in building and managing the training of this model architecture.