---
license: apache-2.0
pipeline_tag: text-generation
library_name: grok
tags:
- grok-1
---
# Grok-1
This repository contains the weights of the Grok-1 open-weights model. You can find the inference code in the [GitHub repository](https://github.com/xai-org/grok-1/tree/main).
# Download instructions
Clone the repo and download the `int8` checkpoint into the `checkpoints` directory by running these commands from the repo root:
```shell
git clone https://github.com/xai-org/grok-1.git && cd grok-1
pip install 'huggingface_hub[hf_transfer]'
huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt-0/* --local-dir checkpoints --local-dir-use-symlinks False
```
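The download is large, so it is worth confirming that the weights actually landed under `checkpoints/ckpt-0` before launching anything. A minimal sketch of such a sanity check (the `checkpoint_present` helper is ours for illustration, not part of the repo):

```python
from pathlib import Path

def checkpoint_present(root: str = "checkpoints", name: str = "ckpt-0") -> bool:
    """Return True if the checkpoint directory exists and contains at least one file."""
    ckpt = Path(root) / name
    return ckpt.is_dir() and any(ckpt.iterdir())

if __name__ == "__main__":
    if not checkpoint_present():
        raise SystemExit("Checkpoint missing or empty; re-run the huggingface-cli download.")
```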
Then, you can run:
```shell
pip install -r requirements.txt
python run.py
```
You should see output from the language model.
Due to the large size of the model (314B parameters), a multi-GPU machine is required to test the model with the example code.
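Since a single GPU will not fit the weights, it can be useful to count the visible GPUs before attempting a run. A hedged sketch using `nvidia-smi -L` (the `gpu_count` helper is ours, not part of the repo, and assumes an NVIDIA machine; it returns 0 if the tool is absent):

```python
import subprocess

def gpu_count() -> int:
    """Count GPUs listed by `nvidia-smi -L`; return 0 if the tool is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "-L"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return 0
    # Each GPU is listed on its own line, e.g. "GPU 0: NVIDIA A100 ..."
    return sum(1 for line in out.splitlines() if line.startswith("GPU "))
```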
p.s. we're hiring: https://x.ai/careers