---
language:
  - it
license: apache-2.0
library_name: transformers
tags:
  - text-generation-inference
  - unsloth
  - mistral
  - trl
  - word-game
  - rebus
  - italian
  - word-puzzle
  - crossword
datasets:
  - gsarti/eureka-rebus
base_model: unsloth/Phi-3-mini-4k-instruct-v0-bnb-4bit
model-index:
  - name: gsarti/phi3-mini-rebus-solver-adapters
    results:
      - task:
          type: verbalized-rebus-solving
          name: Verbalized Rebus Solving
        dataset:
          type: gsarti/eureka-rebus
          name: EurekaRebus
          config: llm_sft
          split: test
          revision: 0f24ebc3b66cd2f8968077a5eb058be1d5af2f05
        metrics:
          - type: exact_match
            value: 0.56
            name: First Pass Exact Match
          - type: exact_match
            value: 0.51
            name: Solution Exact Match
---

# Phi-3 Mini 4K Verbalized Rebus Solver - PEFT Adapters 🇮🇹

This model is a parameter-efficient fine-tuned version of Phi-3 Mini 4K trained for verbalized rebus solving in Italian, released as part of our paper *Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses*. The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.
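To make the key format concrete: in the example below, each number in the key is the length of one word of the solution, and apostrophes appear as standalone tokens. A candidate solution can be checked against its key along these lines (a hypothetical helper shown for illustration only, not part of the official release):

```python
# Hypothetical helper: check a candidate solution against its solution key
def matches_key(solution: str, key: str) -> bool:
    expected = key.split()  # e.g. ["1", "'", "5", "6", "5", "3", "3", "1", "14"]
    words = solution.replace("'", " ' ").split()
    observed = [w if w == "'" else str(len(w)) for w in words]
    return observed == expected

print(matches_key("L'avaro lesina anche ciò che è indispensabile", "1 ' 5 6 5 3 3 1 14"))  # True
```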

The model was trained in 4-bit precision for 5070 steps on the verbalized subset of the EurekaRebus dataset using QLoRA via Unsloth and TRL. This repository contains PEFT-compatible adapters saved throughout training. Use the `revision=<GIT_HASH>` parameter of `from_pretrained` to load mid-training adapter checkpoints.
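For instance, a specific checkpoint could be loaded with PEFT roughly as follows (a minimal sketch: the revision hash is a placeholder for an actual commit of this repository, and `bitsandbytes` is assumed to be installed for the 4-bit base model):

```python
# Minimal sketch: load a mid-training adapter checkpoint with PEFT
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Phi-3-mini-4k-instruct-v0-bnb-4bit",  # 4-bit quantized base model
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base_model,
    "gsarti/phi3-mini-rebus-solver-adapters",
    revision="<GIT_HASH>",  # placeholder: commit hash of the desired checkpoint
)
tokenizer = AutoTokenizer.from_pretrained("gsarti/phi3-mini-rebus-solver-adapters")
```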

We also provide a merged version and 8-bit GGUF quantizations of this model for analysis and local execution.

## Using the Model

The following example shows how to perform inference with Unsloth; a plain Transformers alternative is sketched after the example:


```python
# With Unsloth (efficient, requires GPU)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "gsarti/phi3-mini-rebus-solver-adapters",
    max_seq_length = 1248,
    load_in_4bit = True,
)

# Inference
verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
template = """<s><|user|>
Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.

Rebus: {rebus}
Chiave risolutiva: {key}<|end|>
<|assistant|>"""
input = template.format(rebus=verbalized_rebus, key=solution_key)
inputs = tokenizer(input, return_tensors="pt")["input_ids"]
outputs = model.generate(input_ids = inputs, max_new_tokens = 500, use_cache = True)
model_generations = tokenizer.batch_decode(outputs)
print(model_generations[0])

# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
# 
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
# 
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
# 
# Soluzione: L'avaro lesina anche ciò che è indispensabile
```
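For a setup without Unsloth, the same inference should work with plain Transformers + PEFT roughly as follows (an untested sketch that assumes `peft` and `bitsandbytes` are installed and reuses the `verbalized_rebus`, `solution_key`, and `template` variables defined above):

```python
# With 🤗 Transformers + PEFT (sketch)
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "gsarti/phi3-mini-rebus-solver-adapters",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("gsarti/phi3-mini-rebus-solver-adapters")

# Reuse the prompt pieces defined in the Unsloth example above
prompt = template.format(rebus=verbalized_rebus, key=solution_key)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.batch_decode(outputs)[0])
```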

See the official code release for more examples.

## Local usage with Ollama

A ready-to-use local version of this model is hosted on the Ollama Hub and can be used as follows:

```shell
ollama run gsarti/phi3-mini-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"
```

## Limitations

**Lexical overfitting**: As remarked in the related publication, the model overfits the set of definitions and answers for first-pass words seen during training. As a result, words that were explicitly withheld from the training set cause a significant performance degradation when used as solutions to the definitions of verbalized rebuses. You can verify this limitation by comparing model performance on in-domain and out-of-domain test examples.
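For reference, the evaluation examples can be loaded with 🤗 Datasets using the configuration and split reported in the metadata above (a minimal sketch; check the dataset card for the exact names of the in-domain and out-of-domain subsets):

```python
# Sketch: load the test examples of the verbalized-rebus SFT configuration
from datasets import load_dataset

test = load_dataset("gsarti/eureka-rebus", "llm_sft", split="test")
print(test[0])
```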

## Model curators

For problems or updates on this model, please contact [email protected].

## Citation Information

If you use this model in your work, please cite our paper as follows:

```bibtex
@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = {abs/2408.00584},
    url = {https://arxiv.org/abs/2408.00584},
}
```

## Acknowledgements

We are grateful to the Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini" for making its rebus collection freely accessible on the Eureka5 platform.