The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters
LLM in a flash: Efficient Large Language Model Inference with Limited Memory • Paper • 2312.11514 • Published Dec 12, 2023