Shamurangaiah
posted an update 30 days ago
How do I access Llama 3.1 70B in my Space?

The code below doesn't seem to work; can someone help me with working code?


from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-70B", revision="main")
config.rope_scaling = {"type": "llama3", "factor": 8.0}

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-70B", config=config, use_auth_token=True)

https://huggingface.co/settings/tokens
First, create a read token there and add it to your Space's Secrets as HF_TOKEN. If you put the token in a regular (non-secret) environment variable or hard-code it in your script, it will be exposed to the whole world.
The name HF_TOKEN itself is arbitrary; any name works as long as your code reads the same one.

import os
hf_token = os.getenv("HF_TOKEN")  # reads the Space Secret you created

from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-70B", revision="main", token=hf_token)
config.rope_scaling = {"type": "llama3", "factor": 8.0}
# use_auth_token is deprecated; pass the token via token= instead
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-70B", config=config, token=hf_token)
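Before attempting the gated download, it can help to confirm the Space actually sees the secret, since a missing token surfaces later as a confusing 401 error. A minimal sketch (the helper name `get_hf_token` is hypothetical; `HF_TOKEN` is whatever name you chose in the Space settings):

```python
import os

def get_hf_token(var_name: str = "HF_TOKEN") -> str:
    """Read the token from the environment variable that Space Secrets populate.

    Fails fast with a clear message instead of letting from_pretrained
    fail later with an authentication error.
    """
    token = os.getenv(var_name)
    if not token:
        raise RuntimeError(
            f"{var_name} is not set; add it under Settings -> Secrets in your Space."
        )
    return token
```

If this raises, the secret name configured in the Space settings and the name your code reads don't match.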

I'm pretty sure this model was only available to people on a Pro subscription, but since you're able to post, you must be on a subscription, so I'm sure you'll be fine.


Yes, I do have a Pro subscription.

Can HF staff help?