aws-neuron / optimum-neuron-cache
License: apache-2.0
optimum-neuron-cache / inference-cache-config
7 contributors, 61 commits
Latest commit by dacorvo (HF staff): Add DeepSeek distilled versions of LLama 8B (509e6bf, verified, 16 days ago)
File                     Size        Last updated         Last commit message
gpt2.json                398 Bytes   10 months ago        Add more gpt2 configurations
granite.json             1.3 kB      about 2 months ago   Add configuration for granite models
llama-variants.json      1.45 kB     16 days ago          Add DeepSeek distilled versions of LLama 8B
llama.json               1.67 kB     5 months ago         Update inference-cache-config/llama.json
llama2-70b.json          287 Bytes   8 months ago         Create llama2-70b.json
llama3-70b.json          584 Bytes   16 days ago          Add DeepSeek distilled model
llama3.1-70b.json        289 Bytes   5 months ago         Rename inference-cache-config/Llama3.1-70b.json to inference-cache-config/llama3.1-70b.json
mistral-variants.json    1.04 kB     5 months ago         Remove obsolete mistral variants
mistral.json             1.87 kB     about 1 month ago    Update inference-cache-config/mistral.json
mixtral.json             583 Bytes   5 months ago         Update inference-cache-config/mixtral.json
qwen2.5-large.json       849 Bytes   17 days ago          Update inference-cache-config/qwen2.5-large.json
qwen2.5.json             2.69 kB     17 days ago          Add DeepSeek distilled models
stable-diffusion.json    1.91 kB     5 months ago         Update inference-cache-config/stable-diffusion.json