This repository contains the trained checkpoints of a 1.3-billion-parameter LLaMA-2-architecture model from the work *Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining*.
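Since the checkpoints follow the standard LLaMA-2 architecture, they can be loaded with the Hugging Face `transformers` library. The sketch below is a minimal, hedged example; `MODEL_ID` is a placeholder assumption — replace it with this repository's actual id (or a local checkpoint path).

```python
# Minimal sketch for loading a LLaMA-2-architecture checkpoint with transformers.
# MODEL_ID is a placeholder assumption, not the actual repo id of this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-repo"  # placeholder: substitute the real repo id or local path


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the checkpoint and generate a continuation for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The quick brown fox"))
```

Loading is wrapped in a function and guarded by `__main__` so importing the file does not trigger a multi-gigabyte download.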