This is a 4bpw ExLlamaV2 quantization of mpasila/JP-EN-Translator-1K-steps-7B-merged, made using the default calibration dataset.
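If you want to run the quant directly from Python rather than through a frontend, a minimal sketch along the lines of the ExLlamaV2 example scripts looks like the following. The local model directory, sampler settings, and token budget are assumptions, not part of this release.

```python
# Minimal ExLlamaV2 loading/generation sketch (assumed paths and settings).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/JP-EN-Translator-1K-steps-7B-merged-exl2-4bpw"  # assumed local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)           # split weights across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7            # assumed; tune to taste

prompt = "..."  # build an Alpaca-style prompt as shown in the template below
output = generator.generate_simple(prompt, settings, 512)
print(output)
```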

Original Model card

This is an experimental model and may not perform that well. The dataset used is a modified version of NilanE/ParallelFiction-Ja_En-100k.

The next version should be better (I'll use a GPU with more memory, since the dataset uses fairly long samples).

Prompt format: Alpaca

Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
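As a sketch, the template above can be filled in from Python like this; the instruction wording and example input are purely illustrative assumptions.

```python
# Build an Alpaca-style prompt for the translation task (illustrative only).
ALPACA_TEMPLATE = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, source_text: str) -> str:
    # Leave the response section empty so the model continues with the translation.
    return ALPACA_TEMPLATE.format(instruction=instruction, input=source_text)

prompt = build_prompt(
    "Translate this from Japanese to English.",  # assumed instruction wording
    "吾輩は猫である。名前はまだ無い。",
)
```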

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: augmxnt/shisa-base-7b-v1

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
