---
license: llama2
inference: false
base_model: meta-llama/Llama-2-13b-chat-hf
base_model_relation: quantized
tags:
- green
- llmware-chat
- p13
- ov
---
# llama-2-13b-chat-ov
llama-2-13b-chat-ov is an OpenVINO int4 quantized version of Llama-2-13B-Chat, providing a fast inference implementation optimized for AI PCs with Intel GPU, CPU, and NPU.

llama-2-13b-chat is the official 13B chat fine-tuned version of Llama 2 and remains one of the classic, best all-around chat models from 2023.
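The snippet below is a minimal inference sketch using the llmware model catalog. It assumes the model is registered under the catalog name "llama-2-13b-chat-ov" (matching this repo name) and that llmware with its OpenVINO dependencies is installed.

```python
# Minimal sketch: load the OpenVINO-quantized model through the llmware catalog.
# The catalog name "llama-2-13b-chat-ov" is assumed to match this repo.
from llmware.models import ModelCatalog

# load_model pulls the quantized artifacts and selects an available Intel device
model = ModelCatalog().load_model("llama-2-13b-chat-ov")

# inference() returns a dict; the generated text is under "llm_response"
response = model.inference("What are the key benefits of int4 quantization on an AI PC?")
print(response["llm_response"])
```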
## Model Description
- Developed by: meta-llama
- Quantized by: llmware
- Model type: llama2
- Parameters: 13 billion
- Model Parent: meta-llama/Llama-2-13b-chat-hf
- Language(s) (NLP): English
- License: Llama 2 Community License
- Uses: Chat and general purpose LLM
- RAG Benchmark Accuracy Score: NA
- Quantization: int4
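As an alternative to the llmware catalog, the int4 OpenVINO IR described above can also be loaded directly with Hugging Face optimum-intel. The sketch below is under stated assumptions: the repo id is llmware/llama-2-13b-chat-ov, the tokenizer files are bundled with the export, and the optimum-intel OpenVINO extras are installed.

```python
# Sketch: direct loading of the int4 OpenVINO IR with optimum-intel.
# The repo id and bundled tokenizer are assumptions, not confirmed by this card.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

repo_id = "llmware/llama-2-13b-chat-ov"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = OVModelForCausalLM.from_pretrained(repo_id, device="CPU")  # "GPU" / "NPU" also possible

inputs = tokenizer("Explain what an AI PC is in one short paragraph.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```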