Collection of pre-trained encoders from the MIDL 2025 submission "Unified 3D MRI Representations via Sequence-Invariant Contrastive Learning"
Liam Chalcroft
liamchalcroft
AI & ML interests
Medical imaging, 3D vision
Recent Activity
reacted to ahmed-masry's post about 8 hours ago
Happy to announce AlignVLM, a novel approach to bridging vision and language latent spaces for multimodal understanding in Vision-Language Models (VLMs)!
Read the paper: https://huggingface.co/papers/2502.01341
What's the challenge?
Aligning visual features with language embeddings remains a major bottleneck in VLMs. Existing connectors such as multi-layer perceptrons (MLPs) often introduce noise that degrades performance.
Our Solution: the ALIGN Connector
We propose AlignVLM, a method that maps vision features into a weighted average of LLM text embeddings, ensuring they remain in a space that the LLM can effectively interpret.
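The post itself includes no code, but the stated idea lends itself to a short sketch: project each vision feature to a softmax distribution over the LLM vocabulary, then take the correspondingly weighted average of the LLM's text embeddings. The PyTorch sketch below is a hypothetical illustration of that description; the class name, layer choices, and frozen-embedding assumption are mine, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AlignConnectorSketch(nn.Module):
    # Hypothetical sketch: each vision feature becomes a convex combination of
    # the LLM's text embeddings, so the connector's output always lies in the
    # space the LLM was trained to interpret.
    def __init__(self, vision_dim: int, text_embeddings: torch.Tensor):
        super().__init__()
        vocab_size, _ = text_embeddings.shape
        self.proj = nn.Linear(vision_dim, vocab_size)  # vision -> vocab logits
        # Assumed here: a frozen copy of the LLM's input embedding matrix.
        self.register_buffer("text_emb", text_embeddings)

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim)
        weights = torch.softmax(self.proj(vision_feats), dim=-1)  # (B, N, vocab)
        return weights @ self.text_emb  # (B, N, text_dim): weighted average
```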
How does it perform?
We compared ALIGN against common connectors, including MLPs, the Perceiver Resampler, and Ovis, trained under similar configurations. The result: ALIGN outperforms them all on diverse document understanding tasks.
Meet the AlignVLM Model Family!
We trained Llama 3.1 models (1B, 3B, 8B) with our connector and benchmarked them against various models. The results:
- AlignVLM surpasses all base VLMs trained under similar configurations.
- Our models also perform competitively against instruct VLMs such as Qwen2-VL and InternVL-2.5.
What about robustness to noise?
We injected Gaussian noise (μ=0, σ=3) into the vision encoder's outputs before feeding them to the connector:
- ALIGN Connector: minimal drop (↓1.67%), proving its high robustness!
- MLP Connector: severe degradation (↓25.54%), struggling with noisy inputs.
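For readers who want to reproduce a test like this, here is a minimal sketch of the perturbation described above, assuming the encoder outputs are a PyTorch tensor; the function name is illustrative, not from the paper.

```python
import torch

def add_gaussian_noise(vision_feats: torch.Tensor, sigma: float = 3.0) -> torch.Tensor:
    # Zero-mean Gaussian noise matching the post's (mu=0, sigma=3) setting,
    # applied to the vision encoder's outputs before they reach the connector.
    return vision_feats + sigma * torch.randn_like(vision_feats)
```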
Code & model weights coming soon! Stay tuned!
liked a model about 9 hours ago: MarcusLoren/MeshGPT-preview
liked a model 5 days ago: Luffy503/VoCo
Organizations
Collections: 1
Models: 3
Datasets: none public yet