Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs Paper • 2406.16860 • Published Jun 24 • 59
Llama 3.1 GPTQ, AWQ, and BNB Quants Collection Optimised quants for high-throughput deployments! Compatible with Transformers, TGI & vLLM 🤗 • 9 items • Updated Sep 26 • 56
Llama 3.1 Collection This collection hosts the Transformers-format and original repos of the Llama 3.1, Llama Guard 3, and Prompt Guard models • 11 items • Updated 19 days ago • 636
Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model Paper • 2407.07053 • Published Jul 9 • 42
TaskBench: Benchmarking Large Language Models for Task Automation Paper • 2311.18760 • Published Nov 30, 2023 • 2
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow Paper • 2306.07209 • Published Jun 12, 2023 • 2
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn Paper • 2306.08640 • Published Jun 14, 2023 • 26
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face Paper • 2303.17580 • Published Mar 30, 2023 • 9