Uploaded model

  • Developed by: junelegend
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

Model Details

This model is finetuned on the adeocybersecurity/DockerCommand dataset using the unsloth/llama-3-8b-bnb-4bit base model. This repository contains only the LoRA adapters; the base model is downloaded automatically when the adapters are loaded.
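Because only the adapters are published here, they can also be applied to the base model without Unsloth, for example with transformers and peft. The snippet below is a minimal sketch under that assumption; the device and quantization settings are illustrative, not prescribed by this card.

# Minimal sketch: loading the LoRA adapters on top of the 4-bit base model
# with transformers + peft instead of Unsloth (settings are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",   # base model referenced by this card
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

# Apply the LoRA adapters from this repository.
model = PeftModel.from_pretrained(base, "junelegend/llama-3-docker-command-lora")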

How to use

Load the adapters with Unsloth. The example assumes the standard Alpaca-style prompt template from the Unsloth notebooks; adjust the template if your training format differs.

from unsloth import FastLanguageModel

max_seq_length = 2048   # any value up to the model's context window
dtype = None            # None lets Unsloth auto-detect (float16 / bfloat16)
load_in_4bit = True     # load the base model in 4-bit to save memory

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "junelegend/llama-3-docker-command-lora",  # or a local adapter path
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# Alpaca-style prompt template (the standard Unsloth template; an assumption,
# adjust it if the adapters were trained with a different format).
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
[
    alpaca_prompt.format(
        "translate this sentence in docker command.", # instruction
        "Give me a list of all containers, indicating their status as well.", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
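
tokenizer.batch_decode returns the full prompt followed by the completion. To keep only the generated Docker command, one option (illustrative, not part of the original example) is to slice off the prompt tokens before decoding:

# Keep only the newly generated tokens (everything after the prompt).
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens = True))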

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
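
For reference, a typical Unsloth + TRL setup for this kind of fine-tune looks roughly like the sketch below. This is an assumed reconstruction, not the exact training script: the LoRA rank, hyperparameters, and the step that maps the dataset into a "text" column built with the Alpaca prompt are all assumptions.

# Assumed Unsloth + TRL fine-tuning sketch (not the exact script used for this model).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters to the 4-bit base model (rank and targets are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
)

# Assumes the dataset has been mapped to a "text" column built with the
# Alpaca-style prompt shown above.
dataset = load_dataset("adeocybersecurity/DockerCommand", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",   # older TRL API; newer versions use SFTConfig
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        max_steps = 60,
        learning_rate = 2e-4,
        optim = "adamw_8bit",
        output_dir = "outputs",
    ),
)
trainer.train()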
