latent-action-pretraining committed
Commit 679cf59
Parent: 7a937af

Update README.md

Files changed (1)
  1. README.md +11 -0
README.md CHANGED
@@ -30,6 +30,17 @@ base_model:
 
 ## Model Summary
 
+ - **Developed by:** The LAPA team, consisting of researchers from KAIST, UW, Microsoft, NVIDIA, and AI2.
+ - **Model type:** Vision-language-action (language, image => robot actions)
+ - **Language(s) (NLP):** en
+ - **License:** MIT
+ - **Finetuned from:** [`LWM-Chat-1M-Jax`](https://huggingface.co/LargeWorldModel/LWM-Chat-1M-Jax), a VLM trained from:
+   + **Vision Backbone**: VQGAN
+   + **Language Model**: Llama-2
+ - **Pretraining Dataset:** [Open X-Embodiment](https://robotics-transformer-x.github.io/)
+ - **Repository:**
+ - **Paper:**
+ - **Project Page & Videos:**
 
 ### Primary Use Cases
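Since the listed base model is `LWM-Chat-1M-Jax`, the released weights are likely a JAX checkpoint rather than a `transformers`-compatible model. A minimal sketch of fetching the checkpoint files with `huggingface_hub` follows; the `repo_id` is an assumption inferred from the committing organization and is not confirmed by this commit, so consult the model card's Repository field for the exact id and inference code.

```python
# Minimal sketch: download the LAPA checkpoint files from the Hub.
# NOTE: the repo_id below is an assumption inferred from the committing
# organization; the official repository documents the exact id and the
# JAX-based inference pipeline (this is not a `transformers` model).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="latent-action-pretraining/LAPA-7B-openx")
print(f"Checkpoint files downloaded to: {local_dir}")
```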