# Fireball-R1.1-Llama-3.1-8B

This is a state-of-the-art open-source language model optimized for neutrality, STEM proficiency, and ethical alignment. It was fine-tuned from Deepseek-R1-distill-llama-8b-unsloth-bnb-4bit for science, chemistry, and mathematics, with reduced cultural and political bias.
## Features

- Neutral Worldview: Minimizes political/cultural bias via globally diverse training data and human feedback.
- STEM Specialization: Enhanced performance in:
  - Chemistry: Reaction mechanisms, periodic trends, spectroscopy.
  - Mathematics: Equation solving, proofs, calculus.
  - General Science: Hypothesis generation, research summarization.
- Ethical Guardrails: Filters sensitive content and flags uncertain outputs.
## Installation

```bash
pip install -U transformers torch accelerate
pip install bitsandbytes  # needed for the 8-bit and 4-bit examples below
```
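The quantized examples below require a CUDA GPU. A quick environment check (a minimal sketch using only torch and transformers, matching the packages installed above):

```python
import torch
import transformers

# bitsandbytes 8-bit/4-bit loading requires a CUDA-capable GPU.
print(f"transformers {transformers.__version__}, torch {torch.__version__}")
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```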
## Basic Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1.1-Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Fireball-R1.1-Llama-3.1-8B")

prompt = "Calculate the molar mass of sulfuric acid (H₂SO₄)."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
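Llama 3.1-based checkpoints usually ship a chat template; if this one does, `tokenizer.apply_chat_template` is a cleaner way to build the prompt. A sketch reusing the tokenizer and model loaded above (the system prompt text is illustrative):

```python
messages = [
    {"role": "system", "content": "You are a precise chemistry assistant."},
    {"role": "user", "content": "Calculate the molar mass of sulfuric acid (H₂SO₄)."},
]
# add_generation_prompt appends the assistant header so the model starts its reply
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For reference, the expected answer is 2(1.008) + 32.06 + 4(16.00) ≈ 98.08 g/mol.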
## Advanced Inference (8-bit)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1.1-Llama-3.1-8B")

# Load the model in 8-bit precision using bitsandbytes (requires a CUDA GPU).
# Note: recent transformers versions prefer
# quantization_config=BitsAndBytesConfig(load_in_8bit=True) over the load_in_8bit flag.
model = AutoModelForCausalLM.from_pretrained(
    "EpistemeAI/Fireball-R1.1-Llama-3.1-8B",
    load_in_8bit=True,  # Enable 8-bit loading to reduce memory usage
    device_map="auto",  # Automatically map model layers to the available device(s)
)

# Define the system prompt and the user prompt. The trailing <think> tag cues the
# model's DeepSeek-R1-style reasoning trace.
system_prompt = "You are a highly knowledgeable assistant with expertise in chemistry and physics. <think>"
user_prompt = "Calculate the molar mass of sulfuric acid (H₂SO₄)."

# Combine the system prompt with the user prompt, following a common chat convention.
full_prompt = f"System: {system_prompt}\nUser: {user_prompt}\nAssistant:"

# Tokenize the combined prompt and move the inputs to the GPU
inputs = tokenizer(full_prompt, return_tensors="pt").to("cuda")

# Generate output text; the large max_length leaves room for a long reasoning trace
outputs = model.generate(**inputs, max_length=12200)

# Decode and print the result, skipping special tokens
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
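With generation budgets this large, it can help to stream tokens as they are produced instead of waiting for the full sequence. A minimal sketch using transformers' `TextStreamer`, reusing the model, tokenizer, and inputs from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt hides the echoed input
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_new_tokens=2048, streamer=streamer)
```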
## Inference with 4-bit
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline

# Define the quantization configuration for 4-bit mode.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # Alternative: "fp4" (choose based on your model)
    bnb_4bit_compute_dtype=torch.float16,  # Use FP16 for computations
    bnb_4bit_use_double_quant=True,        # Optionally enable double quantization for better accuracy
)

# Load the tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Fireball-R1.1-Llama-3.1-8B")

# Load the model with the 4-bit quantization configuration.
model = AutoModelForCausalLM.from_pretrained(
    "EpistemeAI/Fireball-R1.1-Llama-3.1-8B",
    quantization_config=quant_config,
    device_map="auto",  # Automatically assigns model parts to available devices
)

# Create a text-generation pipeline.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Provide a text prompt and generate output. The trailing <think> cues the reasoning trace.
prompt = "How does the location of the Sydney Conservatorium of Music impact the academic and professional opportunities available to music students, and how does the conservatorium support student engagement with the music industry in Australia? output<think>"
output = pipe(prompt)
print(output)
```
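Since the model emits DeepSeek-R1-style reasoning, it can be useful to separate the chain of thought from the final answer. A sketch reusing `output` from the pipeline above, assuming the response wraps its reasoning in `<think>…</think>` tags:

```python
import re

text = output[0]["generated_text"]

# Split a DeepSeek-R1-style response into its reasoning trace and final answer
match = re.search(r"<think>(.*?)</think>(.*)", text, re.DOTALL)
if match:
    reasoning, answer = match.group(1).strip(), match.group(2).strip()
    print("Reasoning:", reasoning[:200], "...")
    print("Answer:", answer)
else:
    print(text)  # No closing </think> tag found; print the raw output
```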
## Recommended Parameters

```python
outputs = model.generate(
    **inputs,
    max_length=300,
    do_sample=True,  # sampling must be enabled for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.2,
)
```
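The same settings can also be passed through the pipeline API used in the 4-bit example. A sketch reusing `pipe` and `prompt` from above, with the recommended values:

```python
output = pipe(
    prompt,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.2,
    return_full_text=False,  # return only the generated continuation, not the prompt
)
print(output[0]["generated_text"])
```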
## Uploaded Model

- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model: EpistemeAI/Fireball-R1-Llama-3.1-8B (itself fine-tuned from unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit)

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## Ethical Considerations

Do not use for:
- Medical/legal advice without expert oversight.
- Generating partisan or culturally insensitive content.

Limitations:
- May occasionally produce plausible but incorrect scientific explanations.
- Not fully immune to subtle biases.
## Thank You

We thank the following companies: Unsloth, Meta, and DeepSeek.
## License

This model is licensed under Apache 2.0; see the LICENSE file for details.
Citation
@misc{Fireball-R1-Llama-3.1-8B,
author = {EpistemeAI},
title = {Fireball-R1-8B: A Neutral, Science-Optimized Language Model},
year = {2025},
url = {https://huggingface.co/EpistemeAI/Fireball-R1-Llama-3.1-8B}
}
For support or feedback, contact us at [email protected].