Model Card for TrillionHelp

TrillionHelp uses trillionlabs/Trillion-7B-preview as the backbone.

Model Details

This model is fine-tuned on the namelessai/helply dataset, which is designed to enhance mental health reasoning capabilities.

Model Description

This model was fine-tuned to assist psychologists in supporting their patients.

  • Developed by: Alex Scott
  • Model type: Language Model, Adapter Model (the adapter is available in a folder in the model repo)
  • Finetuned from model: trillionlabs/Trillion-7B-preview
  • Model size: 7.53B parameters (FP16 safetensors)

Usage (adapter only; full-model snippet coming soon)

Use the code snippet below to load the base model and apply the adapter for inference:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model_name = "trillionlabs/Trillion-7B-preview"
adapter_path = "/path/to/adapter"  # Replace with the actual adapter path
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,  # match the FP16 checkpoint weights
)

# Apply the adapter and merge it into the base weights for faster inference
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()

# Run inference
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
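Note that for causal language models, `generate` returns the prompt tokens followed by the completion, so the decoded string echoes the input text. If you only want the newly generated portion, a small post-processing helper (hypothetical, not part of this repo) can strip the prompt prefix:

```python
def extract_completion(decoded: str, prompt: str) -> str:
    """Return only the newly generated text, removing the echoed prompt.

    Assumes `decoded` was produced with skip_special_tokens=True, so the
    prompt appears verbatim at the start of the decoded string.
    """
    if decoded.startswith(prompt):
        return decoded[len(prompt):].lstrip()
    return decoded  # fall back to the full string if the prompt was altered


# Example with a stand-in decoded string (no model call needed):
decoded = "Your input text here It sounds like you are under a lot of stress."
print(extract_completion(decoded, "Your input text here"))
# → It sounds like you are under a lot of stress.
```

Alternatively, you can slice the output tensor before decoding: `outputs[0][inputs["input_ids"].shape[1]:]` keeps only the generated token IDs.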