Serverless TextGen Hub ♨ — now supports inference providers and multimodal input. No GPU required.