Llama 2 7B Physics
A large language model specialized for quantum-physics-related queries. It was fine-tuned from Llama 2 7B Chat using the Unsloth library in Python.
Usage
You can load and use the model with Unsloth:
from unsloth import FastLanguageModel

max_seq_length = 2048  # context length the model was fine-tuned with

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "arham-15/llama2_7B_qphysics",
    max_seq_length = max_seq_length,
    dtype = None,         # None lets Unsloth auto-detect float16/bfloat16
    load_in_4bit = True,  # 4-bit quantization to reduce VRAM usage
)
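Once loaded, you can switch the model into Unsloth's inference mode and generate a response. A minimal sketch (the prompt below is illustrative, and it assumes the fine-tune kept the base Llama 2 chat template and that a CUDA GPU is available):

# Enable Unsloth's faster inference mode
FastLanguageModel.for_inference(model)

# Assumption: the model still expects the Llama 2 chat [INST] ... [/INST] template
prompt = "[INST] What is the Heisenberg uncertainty principle? [/INST]"
inputs = tokenizer(prompt, return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 256)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))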
Alternatively, you can load the model with the Hugging Face Transformers library:
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arham-15/llama2_7B_qphysics"

# Download and load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
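A short generation example with Transformers follows; the prompt is illustrative and device placement assumes a GPU if one is available:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

prompt = "[INST] Explain quantum entanglement in simple terms. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))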
Results
The model was evaluated against its base model using perplexity and shows a clear improvement on quantum-physics-related queries: on 126 of 200 test questions (63%), the fine-tuned model achieved a lower perplexity than the base model.
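The card does not include the evaluation script, but perplexity over a piece of text can be computed from the model's cross-entropy loss. A minimal sketch, assuming a model and tokenizer loaded as shown above:

import torch

def perplexity(model, tokenizer, text):
    # Perplexity = exp(mean cross-entropy loss over the tokens of `text`)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Lower perplexity means the model finds the text more predictable
print(perplexity(model, tokenizer,
                 "The ground state energy of a quantum harmonic oscillator is hbar * omega / 2."))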
Model tree for arham-15/llama2_7B_qphysics
Base model: meta-llama/Llama-2-7b-chat-hf