Update README.md
README.md CHANGED
@@ -95,7 +95,7 @@ VideoChat-Flash-7B is constructed upon UMT-L (300M) and Qwen2-7B, employing only
 
 First, you need to install [flash attention2](https://github.com/Dao-AILab/flash-attention) and some other modules. We provide a simple installation example below:
 ```
-pip install transformers==4.
+pip install transformers==4.40.1
 pip install av
 pip install imageio
 pip install decord
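
Note that the example in this hunk pins only the auxiliary Python dependencies; flash attention2 itself still has to be installed separately, as the first sentence says. A minimal sketch of that step, assuming the standard PyPI package name (`flash-attn`) documented in the linked repository:

```
# Install FlashAttention-2 from PyPI (assumed package name, per the linked repo);
# --no-build-isolation lets the build reuse the already-installed torch.
pip install flash-attn --no-build-isolation
```

A quick check such as `python -c "import flash_attn"` confirms the wheel built against the local torch/CUDA setup.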