Commit b81d616 by latent-action-pretraining (1 parent: e6292db)

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -19,7 +19,7 @@ base_model:
 
  <h1 align="center"> LAPA: Latent Action Pretraining from Videos</h1>
  <p align="center">
- <a href="">Hugging Face</a>&nbsp | &nbsp <a href="">Paper</a>&nbsp | &nbsp <a href="">Github</a> &nbsp
+ <a href="https://latentactionpretraining.github.io/">Website</a>&nbsp | &nbsp <a href="https://arxiv.org/abs/2410.11758">Paper</a>&nbsp | &nbsp <a href="https://github.com/LatentActionPretraining/LAPA">Github</a> &nbsp
  <br>
 
  - LAPA is the **first unsupervised approach** for pretraining Vision-Language-Action (VLA) models without ground-truth robot action labels.
@@ -38,9 +38,9 @@ base_model:
  + **Vision Backbone**: VQGAN
  + **Language Model**: Llama-2
  - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/)
- - **Repository:**
- - **Paper:**
- - **Project Page & Videos:**
+ - **Website:** https://latentactionpretraining.github.io/
+ - **Paper:** https://arxiv.org/abs/2410.11758
+ - **Code:** https://github.com/LatentActionPretraining/LAPA
 
  ### Primary Use Cases