VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI Paper • 2410.11623 • Published Oct 15, 2024 • 49
HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs Paper • 2412.18925 • Published Dec 25, 2024 • 101
AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information? Paper • 2412.02611 • Published Dec 3, 2024 • 24
Roadmap towards Superhuman Speech Understanding using Large Language Models Paper • 2410.13268 • Published Oct 17, 2024 • 35
HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs Paper • 2311.09774 • Published Nov 16, 2023 • 1
Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People Paper • 2403.03640 • Published Mar 6, 2024 • 2
LLMs for Doctors: Leveraging Medical LLMs to Assist Doctors, Not Replace Them Paper • 2406.18034 • Published Jun 26, 2024
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture Paper • 2409.02889 • Published Sep 4, 2024 • 55
HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale Paper • 2406.19280 • Published Jun 27, 2024 • 65
SEED-Bench-2: Benchmarking Multimodal Large Language Models Paper • 2311.17092 • Published Nov 28, 2023