---
license: apache-2.0
tags:
  - LoRA
  - 4-bit
  - BF16
  - FlashAttn2
  - Pokémon
  - EMA
  - fast-training
  - text-generation
  - chat
  - transformers
language: en
datasets:
  - ogmatrixllm/pokemon-lore-instructions
finetuned_from: Qwen/Qwen2.5-Coder-7B-Instruct
tasks:
  - text-generation
metrics:
  - accuracy
  - code_eval
base_model:
  - Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
---

# Qwen2.5-Coder-7B LoRA 4-bit BF16 w/ FlashAttn2, short seq=512 for faster iteration

This is a LoRA-fused model, i.e. the LoRA adapters have been merged into the weights of the base model Qwen/Qwen2.5-Coder-7B-Instruct.

## Model Description

- **Model Name:** Qwen2.5-Coder-7B LoRA 4-bit BF16 w/ FlashAttn2, short seq=512 for faster iteration
- **Language:** en
- **License:** apache-2.0
- **Dataset:** ogmatrixllm/pokemon-lore-instructions
- **Tags:** LoRA, 4-bit, BF16, FlashAttn2, Pokémon, EMA, fast-training, text-generation, chat, transformers

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ogmatrixllm/arcadex-llm")
model = AutoModelForCausalLM.from_pretrained(
    "ogmatrixllm/arcadex-llm",
    torch_dtype="auto",   # load in the checkpoint's stored precision (BF16)
    device_map="auto",    # place the model on a GPU when one is available
)

prompt = "Hello, world!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
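
Since the base model is an instruct/chat model and the dataset is instruction-formatted, prompts are best wrapped in the tokenizer's chat template rather than passed as raw text. A minimal sketch reusing the `tokenizer` and `model` from above (the example question is purely illustrative):

```python
# Chat-style usage via the tokenizer's built-in chat template.
messages = [
    {"role": "user", "content": "Who is the legendary Pokémon of Kanto?"},  # illustrative prompt
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```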
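
The tags advertise 4-bit and FlashAttention 2. To use a comparable setup at inference time (e.g. to fit the 7B model in limited VRAM), the checkpoint can be quantized on load with bitsandbytes and FlashAttention 2 enabled. This is a minimal sketch: the quantization settings below are assumptions, not the recorded training configuration, and `flash_attention_2` requires the `flash-attn` package and a supported GPU.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed inference-time quantization settings; not necessarily the training config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16 compute, matching the card's tags
)

model = AutoModelForCausalLM.from_pretrained(
    "ogmatrixllm/arcadex-llm",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # needs flash-attn and a compatible GPU
    device_map="auto",
)
```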