reach-vb (HF staff) committed on
Commit 91fcea6 · 1 Parent(s): 59b84ef

Update app.py

Files changed (1): app.py (+6 -6)
app.py CHANGED
@@ -31,6 +31,12 @@ def predict(audio_file_pth):
 title = "Music Spectrogram Diffusion: Multi-instrument Music Synthesis with Spectrogram Diffusion"
 
 description = """
+<p>For faster inference without waiting in the queue, you should duplicate this space and upgrade to GPU via the settings.
+<br/>
+<a href="https://huggingface.co/spaces/reach-vb/music-spectrogram-diffusion?duplicate=true">
+<img style="margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>
+</p>
+
 In this work, the authors focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime.
 This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments.
 
@@ -39,12 +45,6 @@ They use a simple two-stage process: MIDI to spectrograms with an encoder-decode
 
 examples = ["examples/beethoven_mond_2.mid", "examples/beethoven_hammerklavier_2.mid"]
 
-gr.HTML("""
-<p>For faster inference without waiting in the queue, you should duplicate this space and upgrade to GPU via the settings.
-<br/>
-<a href="https://huggingface.co/spaces/reach-vb/music-spectrogram-diffusion?duplicate=true">
-<img style="margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>
-</p>""")
 
 article = """
 <div style='margin:20px auto;'>
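The net effect of the diff can be sketched as follows (a minimal sketch assuming the surrounding Gradio app; the `banner` variable is hypothetical — the real app.py inlines the markup directly into the `description` string). Gradio's `gr.Interface` renders its `description` as HTML/Markdown, so the "Duplicate Space" banner can live inside `description` instead of a standalone `gr.HTML(...)` component:

```python
# Hypothetical sketch of the commit's effect: the banner markup is folded
# into the `description` string (which gr.Interface renders as HTML) rather
# than being passed to a separate gr.HTML component.

banner = (
    "<p>For faster inference without waiting in the queue, you should "
    "duplicate this space and upgrade to GPU via the settings.\n"
    "<br/>\n"
    '<a href="https://huggingface.co/spaces/reach-vb/music-spectrogram-diffusion?duplicate=true">\n'
    '<img style="margin-top: 0em; margin-bottom: 0em" '
    'src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>\n'
    "</p>\n"
)

# After the commit: banner is prepended to the existing description text.
description = banner + """
In this work, the authors focus on a middle ground of neural synthesizers
that can generate audio from MIDI sequences with arbitrary combinations of
instruments in realtime.
"""

# Before the commit, the same markup was rendered separately instead:
#     gr.HTML(banner)
```

Folding the banner into `description` keeps the notice attached to the interface header rather than floating as an extra component below the examples.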