Halcyon-1B

Halcyon-1B is a creatively fine-tuned variant of the unsloth/gemma-3-1b-it-unsloth-bnb-4bit model, specifically tailored for imaginative and expressive creative writing tasks. This model has been fine-tuned to excel in storytelling, literary exploration, and nuanced narrative construction.


Model Details


Dataset

This model was fine-tuned using the Nitral-AI Creative Writing ShareGPT dataset.


Capabilities

  • Creative Writing: Exceptional at generating narratives, stories, poetry, and prose.
  • Expressive Nuance: Generates sophisticated, context-aware, and evocative literary outputs.
  • Versatility: Suitable for writers, creators, educators, and storytellers looking to harness AI for enhanced creative exploration.

Intended Use

  • Creative Inspiration: Idea generation, overcoming writer’s block, and expanding narrative horizons.
  • Educational Tools: Supporting literature courses, workshops, and creative writing sessions.
  • Interactive Storytelling: Enabling interactive fiction, dynamic content creation, and innovative narrative formats.

Usage

You can quickly test Halcyon-1B using Unsloth's `FastModel` together with Hugging Face Transformers:

```python
from unsloth import FastModel
from transformers import TextStreamer

# Load model and tokenizer (4-bit, as the base model was trained)
model, tokenizer = FastModel.from_pretrained(
    model_name = "colesmcintosh/Halcyon-1B",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Format the prompt using the Gemma-3 chat template
messages = [{
    "role": "user",
    "content": [{"type": "text", "text": "Write a mythological tale about how the oceans came to be."}],
}]

text = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)

# Generate a streamed response
outputs = model.generate(
    **tokenizer([text], return_tensors="pt").to("cuda"),
    max_new_tokens=64,
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```