chiyuzhang committed · Commit b37a7f5 · 1 Parent(s): e6b00d8

Update README.md
Files changed (1): README.md (+3, -2)
README.md CHANGED
@@ -17,8 +17,9 @@ tags:
 [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)]()
 [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)]()
 
+
 <p align="center" width="100%">
-<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
+<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
 </p>
 Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
 
@@ -29,6 +30,6 @@ Illustration of our proposed InfoDCL framework. We exploit distant/surrogate lab
 ## Model Performance
 
 <p align="center" width="100%">
-<a><img src="https://raw.githubusercontent.com/UBC-NLP/infodcl/blob/master/images/main_table.png" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
+<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
 </p>
 Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs).
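The second hunk fixes a URL that could never resolve: raw.githubusercontent.com serves files at `<owner>/<repo>/<branch>/<path>` with no `blob` path segment, so the old `raw.githubusercontent.com/.../blob/master/...` form 404s. The valid alternatives are the raw host without `blob`, or a regular `github.com/.../blob/...` URL with `?raw=true` appended, as the commit uses. A minimal sketch of the rewrite rule (the `blob_to_raw` helper is illustrative, not part of this repo):

```python
def blob_to_raw(blob_url: str) -> str:
    """Convert a github.com 'blob' URL to its raw.githubusercontent.com form.

    github.com/<owner>/<repo>/blob/<branch>/<path>
      -> raw.githubusercontent.com/<owner>/<repo>/<branch>/<path>
    Note the 'blob' segment is dropped on the raw host.
    """
    prefix = "https://github.com/"
    if not blob_url.startswith(prefix):
        raise ValueError("expected a https://github.com/ URL")
    owner, repo, marker, rest = blob_url[len(prefix):].split("/", 3)
    if marker != "blob":
        raise ValueError("expected a /blob/ URL")
    path, _, _ = rest.partition("?")  # drop a query string such as ?raw=true
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{path}"


print(blob_to_raw(
    "https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true"
))
# -> https://raw.githubusercontent.com/UBC-NLP/infodcl/master/images/main_table.png
```

Either form works in a README; keeping the `?raw=true` style, as this commit does, has the advantage that the link still points at the canonical repository page.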