latent-action-pretraining committed
Commit: d21d339
1 Parent(s): 1125554

Update README.md

Files changed (1)
  1. README.md +22 -13
README.md CHANGED
@@ -54,28 +54,37 @@ Our models are not specifically designed or evaluated for all downstream purpose
 
 ## Usage
 
- ### Fine-tuning
 
- Since the released checkpoint is trained with latent pretraining objective, **the outputs are not real actions that are executable in the real world**. To make the model output executable actions, fine-tuning on a small set of trajectories that contain ground-truth actions (~150 trajs) to map the latent action space to the actual action space.
 
- To finetune the model, run the following command:
 ```bash
- git clone ''
- pip install -r requirements.txt
- ./scripts/finetune.sh
 ```
 
- ### Latent Inference
 
- To analyze the output of the model, which is a sequence of latent actions (8^4), run the following command:
- To finetune the model, run the following command:
 ```bash
- git clone ''
- pip install -r requirements.txt
- ./scripts/inference.sh
 ```
 
-
 
 ## Benchmarks
 
 
 ## Usage
 
+ ### Latent Inference
 
+ To analyze the output of the model, which is a sequence of four latent action tokens drawn from a vocabulary of size 8 (8^4 possible latent actions), run the following command:
  ```bash
+ conda create -n lapa python=3.10 -y
+ conda activate lapa
+ git clone https://github.com/LatentActionPretraining/LAPA.git
+ cd LAPA
+ pip install -r requirements.txt
+ mkdir lapa_checkpoints && cd lapa_checkpoints
+ wget https://huggingface.co/latent-action-pretraining/LAPA-7B-openx/resolve/main/tokenizer.model
+ wget https://huggingface.co/latent-action-pretraining/LAPA-7B-openx/resolve/main/vqgan
+ wget https://huggingface.co/latent-action-pretraining/LAPA-7B-openx/resolve/main/params
+ cd ..
+ python -m latent_pretraining.inference
  ```
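
For intuition, the `(8^4)` above means each latent action is a sequence of four tokens over a vocabulary of eight, i.e. 4096 distinct latent actions. The short sketch below is plain Python, not part of the LAPA codebase, and its function names are invented for illustration; it simply packs such a token sequence into a single index and back:

```python
# Illustrative only: pack/unpack a LAPA-style latent action.
# The sizes come from the "(8^4)" note above, not from the repository's code.
VOCAB_SIZE = 8
SEQ_LEN = 4

def latent_to_index(tokens):
    """Map a length-4 token sequence (each token in 0..7) to one index in [0, 4096)."""
    assert len(tokens) == SEQ_LEN and all(0 <= t < VOCAB_SIZE for t in tokens)
    index = 0
    for t in tokens:
        index = index * VOCAB_SIZE + t
    return index

def index_to_latent(index):
    """Inverse mapping: recover the length-4 token sequence from an index."""
    assert 0 <= index < VOCAB_SIZE ** SEQ_LEN
    tokens = []
    for _ in range(SEQ_LEN):
        index, t = divmod(index, VOCAB_SIZE)
        tokens.append(t)
    return list(reversed(tokens))

print(latent_to_index([3, 0, 7, 1]))   # 1593
print(index_to_latent(1593))           # [3, 0, 7, 1]
```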
 
+ ### Fine-tuning
 
+ Since the released checkpoint is trained with a latent pretraining objective, **the outputs are not real actions that are executable in the real world**. To make the model output executable actions, fine-tune it on a small set of trajectories that contain ground-truth actions (~150 trajectories) to map the latent action space to the actual action space.
+
+ To finetune the model on SIMPLER, run the following command:
 ```bash
+ ./scripts/finetune_simpler.sh
  ```
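
Conceptually, what this fine-tuning stage provides is a mapping from the discrete latent action space to executable robot actions. The sketch below only illustrates that idea under assumed dimensions (4096 latent actions, a 7-dimensional action) with invented class and variable names; it is not the repository's implementation, which fine-tunes the pretrained model itself via the scripts above.

```python
# Conceptual sketch, not LAPA's actual fine-tuning code.
# Assumes 4096 discrete latent actions (8^4) and a 7-DoF continuous action
# (position delta, rotation delta, gripper) -- the action dimension is an assumption.
import torch
import torch.nn as nn

class LatentToRealActionHead(nn.Module):
    """Maps a discrete latent action index to a continuous robot action."""
    def __init__(self, num_latent_actions=8**4, action_dim=7, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_latent_actions, hidden_dim)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, latent_index):                 # latent_index: (batch,) int64
        return self.mlp(self.embed(latent_index))    # (batch, action_dim)

# One training step on (latent action, ground-truth action) pairs from a few trajectories:
head = LatentToRealActionHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
latent_idx = torch.randint(0, 8**4, (32,))   # stand-in for model outputs
true_action = torch.randn(32, 7)             # stand-in for logged robot actions
optimizer.zero_grad()
loss = nn.functional.mse_loss(head(latent_idx), true_action)
loss.backward()
optimizer.step()
```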
 
+ To finetune the model on a custom dataset, run the following commands:
+ ```bash
+ python data/finetune_preprocess.py --input_path "/path_to_json_file" --output_filename "data/real_finetune.jsonl" --csv_filename "data/real_finetune.csv"
+ ./scripts/finetune_real.sh
+ ```
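
The JSON file passed via `--input_path` must follow the schema expected by `data/finetune_preprocess.py`; consult that script for the actual field names. Purely as a hypothetical illustration, a per-step record typically pairs a language instruction and an observation frame with the ground-truth robot action (all field names below are invented for the example):

```python
# Hypothetical trajectory record -- field names are illustrative only;
# see data/finetune_preprocess.py in the LAPA repo for the real schema.
import json

example_step = {
    "instruction": "pick up the red block",              # language command
    "image": "trajectories/ep0001/frame_0000.png",       # observation frame
    "action": [0.01, -0.02, 0.00, 0.0, 0.0, 0.1, 1.0],   # ground-truth action
}

with open("my_trajectories.json", "w") as f:
    json.dump([example_step], f, indent=2)
```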
 
  ## Benchmarks