New Research Alert! Title: CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning
Description: CoDA is an unsupervised domain adaptation (UDA) methodology that improves models' understanding of adverse scenes (fog, night, rain, snow) by highlighting the discrepancies within these scenes. CoDA achieves state-of-the-art performance on widely used benchmarks.
DeepLearning.AI just announced a new short course: Open Source Models with Hugging Face 🤗, taught by Hugging Face's own Maria Khalusova, Marc Sun and Younes Belkada!
As many of you already know, Hugging Face has been a game changer by letting developers quickly pull from hundreds of thousands of pretrained open source models and assemble them into new applications. This course teaches you best practices for building this way, including how to search for and choose among models.
You'll learn to use the Transformers library and walk through multiple models for text, audio, and image processing, including zero-shot image segmentation, zero-shot audio classification, and speech recognition. You'll also learn to use multimodal models for visual question answering, image search, and image captioning. Finally, you'll learn how to demo what you build locally, on the cloud, or via an API using Gradio and Hugging Face Spaces.
Thank you very much to Hugging Face's wonderful team for working with us on this.