Update README.md
README.md
CHANGED
@@ -7,9 +7,9 @@ tags:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# smb-vision-
+# smb-vision-large-1202
 
-This model is trained from scratch using [VideoMAE](https://huggingface.co/docs/transformers/en/model_doc/videomae) on over
+This model is trained from scratch using [VideoMAE](https://huggingface.co/docs/transformers/en/model_doc/videomae) on over 55k CT volumes.
 
 ## Model description
 
@@ -29,31 +29,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-04
-- train_batch_size:
+- train_batch_size: 16
 - eval_batch_size: 1
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- num_epochs:
+- num_epochs: 10.0
 
 ### Training results
 
 {
-  "_runtime":
-  "_step":
-  "
-  "
-  "
-  "
-  "train/
-  "train/
-  "train/
-  "train/learning_rate": 0,
-  "train/loss": 0.5736,
-  "train_loss": 0.5022664608695041,
-  "train_runtime": 54785.1298,
-  "train_samples_per_second": 2.527,
-  "train_steps_per_second": 0.079
+  "_runtime": 2641.091489502,
+  "_step": 399,
+  "_timestamp": 1733187755.3146417,
+  "_wandb.runtime": 2660,
+  "train/epoch": 8.425414364640885,
+  "train/global_step": 18300,
+  "train/grad_norm": 0.04110511764883995,
+  "train/learning_rate": 0.0001624558726951691,
+  "train/loss": 0.4292
 }
 
 
@@ -69,7 +63,7 @@ The following hyperparameters were used during training:
 # load data using `dataload.py`
 
 model = VideoMAEForPreTraining.from_pretrained(
-    standardmodelbio/smb-vision-
+    standardmodelbio/smb-vision-large,
     trust_remote_code=True,
 )
 
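As written in the diff, the model id is missing its quotes and the snippet omits its imports. A runnable version might look like the sketch below; the dummy input and mask shapes follow the stock VideoMAE docs and are illustrative assumptions only, since the real CT-volume preprocessing lives in `dataload.py`, which this commit does not show:

```python
import torch
from transformers import VideoMAEForPreTraining

model = VideoMAEForPreTraining.from_pretrained(
    "standardmodelbio/smb-vision-large",  # quoted string, unlike the raw diff
    trust_remote_code=True,
)

# Illustrative dummy input only: stock VideoMAE expects pixel_values of shape
# (batch, num_frames, channels, height, width); the custom remote code for CT
# volumes may expect a different layout.
num_frames = 16
pixel_values = torch.randn(1, num_frames, 3, 224, 224)

# Boolean mask over tubelet patches, as in the VideoMAEForPreTraining docs.
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)
```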