DevPanda004 committed
Commit 5fcdbb7
1 Parent(s): 13d4b8e

Model save

Files changed (1)
  1. README.md +12 -17
README.md CHANGED
@@ -1,10 +1,8 @@
 ---
-base_model: facebook/musicgen-melody
 library_name: peft
 license: cc-by-nc-4.0
+base_model: facebook/musicgen-melody
 tags:
-- text-to-audio
-- DevPanda004/demucsmetadata
 - generated_from_trainer
 model-index:
 - name: musicgen-melody-indian
@@ -16,10 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # musicgen-melody-indian
 
-This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the DEVPANDA004/DEMUCSMETADATA - DEFAULT dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.9599
-- Clap: 0.2567
+This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on an unknown dataset.
 
 ## Model description
 
@@ -39,14 +34,14 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 1
-- seed: 456
+- train_batch_size: 4
+- eval_batch_size: 8
+- seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Use adamw_torch with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 2.0
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -55,8 +50,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- PEFT 0.13.2
-- Transformers 4.47.0.dev0
-- Pytorch 2.5.0+cu121
-- Datasets 3.0.2
-- Tokenizers 0.20.1
+- PEFT 0.14.0
+- Transformers 4.48.0.dev0
+- Pytorch 2.1.2+cu121
+- Datasets 3.2.0
+- Tokenizers 0.21.0
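Note that the new hyperparameters give an effective train batch size of 4 × 8 = 32 (per-device batch × gradient accumulation steps), which matches the updated total_train_batch_size line. As a minimal sketch, the recorded values map onto a 🤗 `TrainingArguments` configuration roughly as follows; `output_dir` is a placeholder, and the actual training script is not shown in this commit:

```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters recorded in this commit.
# output_dir is a placeholder, not a path confirmed by the commit.
args = TrainingArguments(
    output_dir="musicgen-melody-indian",
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,   # effective train batch: 4 * 8 = 32
    optim="adamw_torch",
    adam_beta1=0.9,                  # betas=(0.9, 0.99)
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed precision
)
```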
 
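Because the card records a PEFT adapter over facebook/musicgen-melody rather than full weights, inference means loading the base checkpoint and attaching the adapter. A minimal sketch, assuming the adapter lives at `DevPanda004/musicgen-melody-indian` (inferred from the model name in this commit, not confirmed by it):

```python
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

# Base model is named in the card; the adapter repo id below is an
# assumption inferred from the model name, not confirmed by this commit.
base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
model = PeftModel.from_pretrained(base, "DevPanda004/musicgen-melody-indian")

processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")
inputs = processor(
    text=["Hindustani classical raga with sitar and tabla"],  # example prompt
    return_tensors="pt",
)

# MusicGen generates ~50 audio-codec frames per second, so 256 new tokens
# is roughly five seconds of audio.
audio_values = model.generate(**inputs, max_new_tokens=256)
```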