D0k-tor committed on
Commit fcd7ed3 · 1 Parent(s): c8f3b13

Update app.py

Files changed (1)
  1. app.py +2 -5
app.py CHANGED
@@ -37,16 +37,13 @@ with gr.Blocks() as demo:
  gr.HTML(
  """
  <div style="text-align: center; max-width: 1200px; margin: 20px auto;">
- <h1 style="font-weight: 900; font-size: 3rem; margin: 0rem">
+ <h2 style="font-weight: 900; font-size: 3rem; margin: 0rem">
  📸 ViT Image-to-Text with LORA 📝
- </h1>
+ </h2>
  <h2 style="text-align: left; font-weight: 450; font-size: 1rem; margin-top: 2rem; margin-bottom: 1.5rem">
  In the field of large language models, the challenge of fine-tuning has long perplexed researchers. Microsoft, however, has unveiled an innovative solution called <b>Low-Rank Adaptation (LoRA)</b>. With the emergence of behemoth models like GPT-3 boasting billions of parameters, the cost of fine-tuning them for specific tasks or domains has become exorbitant.
  <br>
  <br>
- LoRA offers a groundbreaking approach by freezing the weights of pre-trained models and introducing trainable layers known as <b>rank-decomposition matrices in each transformer block</b>. This ingenious technique significantly reduces the number of trainable parameters and minimizes GPU memory requirements, as gradients no longer need to be computed for the majority of model weights.
- <br>
- <br>
  You can find more info here: <u><a href="https://www.linkedin.com/pulse/fine-tuning-image-to-text-algorithms-with-lora-daniel-puente-viejo" target="_blank">Linkedin article</a></u>
  </h2>
  </div>
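
The paragraph removed in this commit describes the core LoRA idea: the pretrained weights are frozen and small trainable rank-decomposition matrices are added, so only a tiny fraction of parameters receives gradients. As a rough standalone illustration of that idea (a minimal PyTorch sketch, not code from this repository; the LoRALinear class, rank r and alpha values below are made up for the example):

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights: no gradients are computed for them.
        for p in self.base.parameters():
            p.requires_grad = False
        # Trainable rank-decomposition matrices (the only new parameters).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: wrapping one 768x768 projection; only A and B would be trained.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 8 * 768 = 12,288 vs ~590k frozen

In practice a library such as Hugging Face PEFT applies this wrapping to the attention projections of each transformer block automatically; the sketch above only shows why the trainable parameter count drops so sharply.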