kartiknarayan committed c590026 (verified) · Parent(s): 241fbeb

Update README.md

Files changed (1): README.md (+65 −3)
---
license: mit
language:
- en
---

# PETAL<i>face</i> Model Card

<div align="center">

[**Project Page**](https://kartik-3004.github.io/PETALface/) **|** [**Paper (ArXiv)**](https://kartik-3004.github.io/PETALface/) **|** [**Code**](https://github.com/Kartik-3004/PETALface)

</div>

## Introduction

<div align="center">
<img src='assets/visual_abstract.png' height="50%" width="50%">
</div>

PETALface is the first work to use image-quality-adaptive LoRA layers for low-resolution face recognition. The main contributions of our work are:
1. We introduce the use of the LoRA-based PETL technique to adapt large pre-trained face-recognition models to low-resolution datasets.
2. We propose an image-quality-based weighting of LoRA modules to create separate proxy encoders for high-resolution and low-resolution data, ensuring effective extraction of embeddings for face recognition.
3. We demonstrate the superiority of PETAL<i>face</i> in adapting to low-resolution datasets, outperforming other state-of-the-art models on low-resolution benchmarks while maintaining performance on high-resolution and mixed-quality datasets.

## Training Framework
<div align="center">
<img src='assets/petalface.png'>
</div>

Overview of the proposed PETALface approach: we add a trainable module to the linear layers in the attention blocks and to the final feature-projection MLP (the trainable module is highlighted on the right). Specifically, we add two LoRA layers, whose weighting α is determined by the input-image quality, computed with an off-the-shelf image-quality-assessment (IQA) network.
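The quality-weighted blending of two LoRA branches can be sketched as a small NumPy module. This is a minimal illustration of the idea only; the class name, rank, and initialization below are assumptions, not the repository's actual implementation:

```python
import numpy as np

class DualLoRALinear:
    """Frozen linear layer with two low-rank (LoRA) updates, blended by an
    image-quality score alpha in [0, 1] (illustrative sketch, not the repo code)."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
        # High-resolution LoRA pair: delta_W_hr = B_hr @ A_hr
        self.A_hr = rng.normal(size=(rank, d_in)) * 0.01
        self.B_hr = np.zeros((d_out, rank))          # B starts at zero, so delta_W = 0
        # Low-resolution LoRA pair: delta_W_lr = B_lr @ A_lr
        self.A_lr = rng.normal(size=(rank, d_in)) * 0.01
        self.B_lr = np.zeros((d_out, rank))

    def __call__(self, x, alpha):
        base = self.W @ x
        hr = self.B_hr @ (self.A_hr @ x)
        lr = self.B_lr @ (self.A_lr @ x)
        # alpha near 1 -> high image quality -> trust the high-resolution branch
        return base + alpha * hr + (1.0 - alpha) * lr
```

In this sketch, α would come from the IQA network per input image, so each sample effectively sees its own interpolation between the high-resolution and low-resolution proxy encoders.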

## Usage

The pre-trained weights can be downloaded directly from this repository or with Python:
```python
from huggingface_hub import hf_hub_download

# Fine-tuned weights

# The filename "swin_arcface_webface4m_tinyface" indicates a model with a Swin backbone,
# pre-trained on the WebFace4M dataset with the ArcFace loss and fine-tuned on TinyFace.
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_arcface_webface4m_tinyface/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_cosface_webface4m_tinyface/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_cosface_webface4m_briar/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_cosface_webface12m_briar/model.pt", local_dir="./weights")

# Pre-trained weights
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_arcface_webface4m/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_cosface_webface4m/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_arcface_webface12m/model.pt", local_dir="./weights")
hf_hub_download(repo_id="kartiknarayan/petalface", filename="swin_cosface_webface12m/model.pt", local_dir="./weights")
```
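Since the checkpoint names encode the training configuration, they can be unpacked with a small helper. This function is hypothetical (not provided by the repository) and simply mirrors the naming convention described in the comment above:

```python
def parse_checkpoint_name(name):
    """Split a checkpoint folder name like 'swin_arcface_webface4m_tinyface'
    into its parts; the fine-tune dataset is absent for pre-trained weights."""
    parts = name.split("_")
    info = {"backbone": parts[0], "loss": parts[1], "pretrain": parts[2]}
    if len(parts) > 3:
        info["finetune"] = parts[3]
    return info

print(parse_checkpoint_name("swin_arcface_webface4m_tinyface"))
# {'backbone': 'swin', 'loss': 'arcface', 'pretrain': 'webface4m', 'finetune': 'tinyface'}
```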

## Citation
```bibtex
Coming soon!
```

Please check our [GitHub repository](https://kartik-3004.github.io/PETALface/) for complete instructions.