carohiguera committed
Commit • 460c7fc
1 Parent(s): 9b47b6e

added Sparsh MAE base

Files changed:
- README.md +47 -0
- mae_vitbase.ckpt +3 -0
- mae_vitbase.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,47 @@
---
license: cc-by-nc-4.0
tags:
- sparsh
- mae
- base
- tactile
---

# Sparsh (base-sized model) trained using MAE

Sparsh is a Vision Transformer (ViT) model trained using the MAE method, specifically adapted for vision-based tactile sensors such as DIGIT and GelSight.

Disclaimer: This model card was written by the Sparsh authors. The ViT model and MAE objectives have been adapted for the tactile sensing use case.

## Model description

We introduce *Sparsh*, a family of touch representations trained using Self-Supervised Learning (SSL) across multiple sensors, including DIGIT, GelSight 2017 (with markers), and GelSight Mini (without markers). This model was trained using the MAE SSL approach.

The model takes two tactile images as input, concatenated along the channel dimension with a temporal stride of 5 samples: $I_t \oplus I_{t-5} \rightarrow x \in \mathbb{R}^{h \times w \times 6}$. For a sensor operating at 60 FPS, this corresponds to an inference window of approximately 80 ms, which is the reaction time humans need to adjust grip force when detecting partial slip.
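
To make the input format concrete, here is a minimal sketch of the packing described above. The function name and frame shapes are illustrative assumptions; the actual preprocessing pipeline is defined in the Sparsh repository.

```python
import torch

# Illustrative sketch (not the repository's exact pipeline): two RGB tactile
# frames captured 5 samples apart are concatenated along the channel dimension
# to form the 6-channel input described above.
def pack_input(frame_t: torch.Tensor, frame_t_minus_5: torch.Tensor) -> torch.Tensor:
    """frame_t, frame_t_minus_5: float tensors of shape (3, H, W)."""
    x = torch.cat([frame_t, frame_t_minus_5], dim=0)  # (6, H, W)
    return x.unsqueeze(0)                              # (1, 6, H, W), batch of one
```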

We preprocess the tactile images with background subtraction, which makes the representations robust to distractors such as shadows and variations in light placement.
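
As an illustration of this preprocessing step, a simple background subtraction might look like the following; the no-contact reference frame and the clamping choice are assumptions, not the repository's exact implementation.

```python
import torch

# Hypothetical background subtraction: remove a per-sensor reference frame
# (e.g., an image captured with no contact) from each tactile frame so that
# static lighting and shadow patterns cancel out.
def subtract_background(frame: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
    """Both tensors have shape (3, H, W) with values in [0, 1]."""
    return (frame - background).clamp(-1.0, 1.0)
```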

By pre-training the model via SSL, Sparsh learns representations for pairs of tactile images that can then be used to extract features useful for downstream tasks. To train a downstream task in a supervised fashion, you can place a standard decoder (or head), such as attentive pooling followed by a shallow MLP, on top of the pre-trained Sparsh encoder.
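
A minimal sketch of such a head is shown below: a single learnable query attends over the encoder's patch tokens (attentive pooling), and a shallow MLP maps the pooled feature to the task output. The embedding size, head count, and MLP width are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

# Sketch of a downstream head: attentive pooling over encoder tokens,
# followed by a shallow MLP. Dimensions are illustrative assumptions.
class AttentivePoolingHead(nn.Module):
    def __init__(self, embed_dim: int = 768, num_heads: int = 8, num_outputs: int = 3):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, 256),
            nn.GELU(),
            nn.Linear(256, num_outputs),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        """tokens: (B, N, embed_dim) patch embeddings from the Sparsh encoder."""
        q = self.query.expand(tokens.size(0), -1, -1)  # (B, 1, embed_dim)
        pooled, _ = self.attn(q, tokens, tokens)       # (B, 1, embed_dim)
        return self.mlp(pooled.squeeze(1))             # (B, num_outputs)
```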

## Intended uses & limitations

You can use the Sparsh model to extract touch representations for vision-based tactile sensors, including DIGIT, GelSight, and GelSight Mini. You have two options:

1. Use the frozen Sparsh encoder: this lets you leverage the pre-trained weights of the Sparsh model without modifying them.
2. Fine-tune the Sparsh encoder: you can fine-tune the Sparsh encoder along with the training of your downstream task, allowing the model to adapt to your specific use case.

Both options let you take advantage of the touch representations learned by the Sparsh model; a minimal sketch of both setups follows below.
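
The sketch below assumes `encoder` is the loaded Sparsh ViT and `head` is a downstream decoder such as the pooling head above; both names are placeholders, and the learning rates are illustrative.

```python
import torch

# Option 1: frozen encoder -- only the downstream head is optimized.
for p in encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

# Option 2: fine-tuning -- encoder and head are trained together, typically
# with a smaller learning rate for the pre-trained encoder weights.
optimizer = torch.optim.AdamW(
    [
        {"params": encoder.parameters(), "lr": 1e-5},
        {"params": head.parameters(), "lr": 1e-4},
    ]
)
```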

## How to Use

For detailed instructions on how to load the encoder and integrate it into your downstream task, please refer to our [GitHub repository](https://github.com/facebookresearch/sparsh).
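
For a quick local sanity check of the released weights (not a substitute for the loading utilities in the repository), the `.safetensors` file can be inspected with the `safetensors` library; the file name below assumes it has already been downloaded into the working directory.

```python
from safetensors.torch import load_file

# Load the raw state dict from the released checkpoint and list a few entries.
# The key layout and the matching encoder class are defined in the Sparsh
# GitHub repository.
state_dict = load_file("mae_vitbase.safetensors")
print(f"{len(state_dict)} tensors in checkpoint")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```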

### BibTeX entry and citation info

```bibtex
@inproceedings{higuera2024sparsh,
  title={Sparsh: Self-supervised touch representations for vision-based tactile sensing},
  author={Carolina Higuera and Akash Sharma and Chaithanya Krishna Bodduluri and Taosha Fan and Mrinal Kalakrishnan and Michael Kaess and Byron Boots and Mike Lambeta and Tingfan Wu and Mustafa Mukadam},
  booktitle={8th Annual Conference on Robot Learning},
  year={2024},
  url={https://openreview.net/forum?id=xYJn2e1uu8}
}
```
mae_vitbase.ckpt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:83ae4a8fa6a7bbd9702b14586feafdba63252e83afad26c3e5832e0ad39446ad
size 1354221049
mae_vitbase.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:217053e6d84373fbd8ed255e73cd9713b3780fb2e9a5ad6e1f22f8b1cd1ec79b
size 345036880