TokenButler

The collection of TokenButler models can be found here. To run the meta-llama/Llama-3.2-3B model:
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
question = "If millionaires have butlers, why don't million dollar language models have a butler too? I think its because "
model_name = "akhauriyash/Llama-3.2-3B-Butler"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
response = generator(question, max_new_tokens=200, do_sample=True, top_p=0.95, temperature=0.7)
print(response[0]['generated_text'][len(question):])
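Because the model is loaded with trust_remote_code=True, its attention layers are TokenButler's custom AttentionExperimental modules. A minimal sketch for listing them and their current sparsity setting (it relies only on the AttentionExperimental class name and the token_sparse_method attribute used in set_sparsity below):

# List the TokenButler attention modules and their current sparsity strategy.
for name, module in model.named_modules():
    if "AttentionExperimental" in module.__class__.__name__:
        # token_sparse_method holds the per-module strategy string
        # (likely "fixed_50pc" by default; this is an assumption).
        print(name, getattr(module, "token_sparse_method", None))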
Note that the default configured sparsity is 50%, with a sliding window of 128 tokens and 8 anchor tokens. To change the sparsity, use the following function after loading the model. At the moment 'fixed' is the only supported strategy; it fixes the sparsity of every layer except the first at the given percentage ('pc'). This can also be found in test_hf.py. The sliding window and anchor tokens can be changed in a similar manner (see the sketch after the function below).
def set_sparsity(model, sparsity):
    # Apply the sparsity setting to every TokenButler attention module.
    for module in model.modules():
        if "AttentionExperimental" in module.__class__.__name__:
            module.token_sparse_method = sparsity
            module.set_token_sparsity()
    return model

model = set_sparsity(model, "fixed_60pc")
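The sliding window and anchor token counts can be adjusted the same way, by setting the corresponding attributes on each AttentionExperimental module. A minimal sketch, assuming the attributes are named sliding_window and num_anchor_tokens (hypothetical names; check the module source for the actual ones):

def set_window_and_anchors(model, window, anchors):
    for module in model.modules():
        if "AttentionExperimental" in module.__class__.__name__:
            # Hypothetical attribute names; the real attributes may differ.
            module.sliding_window = window
            module.num_anchor_tokens = anchors
    return model

model = set_window_and_anchors(model, 128, 8)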
Predictor Architecture

Custom Synthetic Task

All of our results and traces from experiments are located in ablation_results/.
Note: Our predictor design has improved since the arXiv paper release (we added a layer-norm to stabilize training). Further, to keep the focus on the main predictor design and the training/evaluation scripts, we have removed the ablation scripts. To reproduce the original results and predictor models, please check out commit 0412fc24a3b770e4d82e6d7064a8172f24c5fcd3 and download the old models from the Drive Link.
For the latest models, try the Hugging Face integration. Wandb logs for the trained models are available.