
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

fine-tuned-gpt-neo - bnb 8bits

Original model description:

license: mit
language:
- en
base_model:
- EleutherAI/gpt-neo-1.3B
library_name: transformers

Fine-tuned GPT-Neo Model

This is a fine-tuned version of EleutherAI/gpt-neo-1.3B, adapted for specific downstream tasks.

Model Details

  • Model Type: GPT-Neo
  • Fine-tuned for: [Specify tasks or datasets]

Usage

To use the model, run the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Torrchy/fine-tuned-gpt-neo")
tokenizer = AutoTokenizer.from_pretrained("Torrchy/fine-tuned-gpt-neo")
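
Since this repository provides a bitsandbytes 8-bit quantization, the model can also be loaded directly in 8-bit precision. The snippet below is a minimal sketch, assuming the bitsandbytes package is installed and a CUDA GPU is available; the repository ID shown is the original model's, so substitute this quantized repo's ID if you want the pre-quantized weights.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit loading via bitsandbytes (assumption: bitsandbytes installed, CUDA GPU available).
# The repo ID below is the original fine-tuned model; swap in this page's
# quantized repo ID to use the pre-quantized weights instead.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "Torrchy/fine-tuned-gpt-neo",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Torrchy/fine-tuned-gpt-neo")

# Simple generation example.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

With device_map="auto", Accelerate places the quantized layers on the available GPU(s) automatically, so no explicit .to("cuda") call is needed.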