davidpiscasio committed
Commit 7ce4fa4 · 1 Parent(s): 736ecef

Update app.py

Files changed (1): app.py +1 -1
app.py CHANGED
@@ -70,5 +70,5 @@ gr.Interface(fn=unpaired_img2img,
      ['Image to Van Gogh', "examples/img2.jpg"],
      ['Image to Monet', "examples/img1.jpg"]],
      description="<p align='justify'>This is an implementation of the unpaired image to image translation using a pretrained CycleGAN model. To use the app, kindly select first the type of translation you wish to perform among the choices in the dropdown menu. Then, upload the image you wish to translate and click on the 'Submit' button.</p>",
-     article="<p align='justify'>The model architecture used in this space is the Cycle-Consistent Adversarial Network, commonly referred to as CycleGAN. CycleGAN aims to perform translation of images between two domains without the need for expensive and difficult-to-acquire paired data for training. The architecture consists of two generators, one generates an image from X to Y while the other generates an image from Y back to X. These two generators are also paired with a discriminator each that aims to discriminate generated images from real images, thus improving model performance. All credits go to Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros from the Berkeley AI Research (BAIR) laboratory at UC Berkeley for the creation of CycleGAN. To know more about Unpaired Image to Image Translation and CycleGAN, you may access their <a href = https://paperswithcode.com/paper/unpaired-image-to-image-translation-using>Papers with Code</a> page and their <a href = https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix>GitHub</a> repository.</p>",
+     article="<p align='justify'>The model architecture used in this space is the Cycle-Consistent Adversarial Network, commonly referred to as CycleGAN. CycleGAN aims to perform translation of images between two domains without the need for expensive and difficult-to-acquire paired training data. The architecture consists of two generators, one generates an image from domain X to domain Y while the other generates an image from domain Y back to domain X. These two generators are also paired with a discriminator each that aims to discriminate generated images from real images, thus improving model performance. All credits go to Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros from the Berkeley AI Research (BAIR) laboratory at UC Berkeley for the creation of CycleGAN. To know more about Unpaired Image to Image Translation and CycleGAN, you may access their <a href = https://paperswithcode.com/paper/unpaired-image-to-image-translation-using>Papers with Code</a> page and their <a href = https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix>GitHub</a> repository.</p>",
      allow_flagging="never").launch(inbrowser=True)
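The `article` text edited in this commit describes CycleGAN's structure: two generators mapping X→Y and Y→X, trained so that a round trip returns close to the original image. As a rough, hypothetical illustration of that cycle-consistency idea only (the real model uses convolutional generators and adversarial discriminators, none of which appear here), a toy sketch with arithmetic stand-ins for the generators:

```python
# Toy sketch of CycleGAN's cycle-consistency idea (illustrative only).
# G and F are placeholder "generators" between domains X and Y; in the
# actual CycleGAN they are convolutional neural networks.

def G(x):
    # stand-in generator X -> Y
    return [v + 1.0 for v in x]

def F(y):
    # stand-in generator Y -> X (inverse of G here, by construction)
    return [v - 1.0 for v in y]

def cycle_consistency_loss(x):
    # Mean L1 distance between x and its round trip F(G(x)),
    # mirroring the cycle-consistency term in the CycleGAN paper.
    reconstructed = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, reconstructed)) / len(x)

# Because F exactly inverts G in this toy, the loss is ~0; training the
# real model pushes the learned generators toward this property.
print(cycle_consistency_loss([0.2, 0.5, 0.9]))
```

In the real architecture this loss is minimized jointly with the two adversarial losses, which is what lets the model learn the translation without paired training data.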