ZYMPKU committed
Commit: f5440de
Parent(s): 021709c
Files changed (1): app.py (+4 -4)
app.py CHANGED
@@ -150,10 +150,10 @@ if __name__ == "__main__":
     <h1 style="font-weight: 600; font-size: 2rem; margin: 0rem">
         UDiffText: A Unified Framework for High-quality Text Synthesis in Arbitrary Images via Character-aware Diffusion Models
     </h1>
-    <h3 style="font-weight: 450; font-size: 1rem; margin: 0rem">
-        <a href='https://arxiv.org/pdf/******'><img src='https://img.shields.io/badge/Arxiv-******-DF826C'></a>
-        <a href='https://github.com/ZYM-PKU/UDiffText'><img src='https://img.shields.io/badge/Code-UDiffText-D0F288'></a>
-        <a href='https://udifftext.github.io'><img src='https://img.shields.io/badge/Project-UDiffText-8ADAB2'></a>
+    <h3 style="font-weight: 450; font-size: 1rem; margin: 0rem; overflow: hidden;">
+        <a style="float: left" href='https://arxiv.org/pdf/******'><img src='https://img.shields.io/badge/Arxiv-******-DF826C'></a>
+        <a style="float: left" href='https://github.com/ZYM-PKU/UDiffText'><img src='https://img.shields.io/badge/Code-UDiffText-D0F288'></a>
+        <a style="float: left" href='https://udifftext.github.io'><img src='https://img.shields.io/badge/Project-UDiffText-8ADAB2'></a>
     </h3>
     <h2 style="text-align: left; font-weight: 450; font-size: 1rem; margin-top: 0.5rem; margin-bottom: 0.5rem">
         Our proposed UDiffText is capable of synthesizing accurate and harmonious text in either synthetic or real-word images, thus can be applied to tasks like scene text editing (a), arbitrary text generation (b) and accurate T2I generation (c)
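For context on why the commit pairs the two style changes: `float: left` takes each badge link out of normal flow, so without a fix the parent `<h3>` would collapse to zero height around them; adding `overflow: hidden` makes the heading establish a block formatting context that contains its floated children. A minimal standalone sketch of the same pattern (the `href` targets and badge images here are placeholders, not the real links from the commit):

```html
<!-- Parent with overflow: hidden contains its floated children;
     remove it and the <h3> collapses around the floated links. -->
<h3 style="font-weight: 450; font-size: 1rem; margin: 0; overflow: hidden;">
  <a style="float: left" href="#paper"><img src="badge-paper.svg" alt="Paper badge"></a>
  <a style="float: left" href="#code"><img src="badge-code.svg" alt="Code badge"></a>
  <a style="float: left" href="#project"><img src="badge-project.svg" alt="Project badge"></a>
</h3>
```

A modern alternative with the same visual result would be `display: flex` on the `<h3>`, which avoids floats entirely; the commit's float-plus-overflow approach is the classic clearfix-style containment.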