Run the Model

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("alexpaul/QI-large-v1")

# 8-bit loading requires the bitsandbytes package and a CUDA-capable GPU
base_model = LlamaForCausalLM.from_pretrained(
    "alexpaul/QI-large-v1",
    load_in_8bit=True,
    device_map="auto",
)
```
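Once the model and tokenizer are loaded, text can be generated with the standard `generate` API. A minimal sketch continuing from the snippet above; the prompt text and generation parameters (`max_new_tokens`, `temperature`) are illustrative, not recommendations from the model authors:

```python
import torch

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)

# Sample a short continuation; disable gradient tracking for inference
with torch.no_grad():
    output_ids = base_model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```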