# Uploaded model

  • Developed by: vinimuchulski
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-2-2b-it-bnb-4bit

This gemma2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.


# Function-Calling Agent with LangChain and a Custom Prompt

This project implements a LangChain-based agent with a custom prompt for performing function calls, using the `GEMMA-2-2B-it-GGUF-function_calling` model hosted on Hugging Face.

## Description

The code creates an agent that uses custom tools and a language model to answer questions through a structured flow of thought and action. It includes a custom tool (`get_word_length`) that computes the length of a word, and a modified ReAct prompt to guide the agent's reasoning.
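The tool's logic is plain Python and can be sanity-checked on its own before wiring it into LangChain; a minimal standalone sketch:

```python
def get_word_length(word: str) -> int:
    """Return the number of characters in the given word."""
    return len(word)

print(get_word_length("hello"))  # 5
```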

## Prerequisites

- Python 3.8+
- Required libraries:
  ```bash
  pip install langchain langchain-ollama
  ```

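The `MODEL` string in the code below assumes the GGUF model is available to a local Ollama server; Ollama can pull GGUF models directly from the Hugging Face Hub, so something like the following should make it available (assuming Ollama is installed and running):

```shell
# Pull the GGUF model from the Hugging Face Hub into local Ollama
ollama pull hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest
```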
## Code

Here is the main code:

```python
from langchain.agents import AgentExecutor, create_react_agent, tool
from langchain.prompts import PromptTemplate
from langchain_ollama.llms import OllamaLLM

# Define the model (served locally through Ollama)
MODEL = "hf.co/vinimuchulski/GEMMA-2-2B-it-GGUF-function_calling:latest"
llm = OllamaLLM(model=MODEL)

# Create a custom tool
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

# Define the custom ReAct prompt
custom_react_prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad", "tools", "tool_names"],
    template="""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action, formatted as a string
Observation: the result of the action
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Example:
Question: What is the length of the word "hello"?
Thought: I need to use the get_word_length tool to calculate the length of the word "hello".
Action: get_word_length
Action Input: "hello"
Observation: 5
Thought: I now know the length of the word "hello" is 5.
Final Answer: 5

Begin!

Question: {input}
Thought: {agent_scratchpad}""",
)

# Configure the tools (avoid shadowing the imported `tool` decorator)
tools = [get_word_length]
tools_str = "\n".join(f"{t.name}: {t.description}" for t in tools)
tool_names = ", ".join(t.name for t in tools)

# Create the agent
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=custom_react_prompt.partial(tools=tools_str, tool_names=tool_names),
)

# Create the executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
)

# Test the agent
question = "What is the length of the word PythonDanelonAugustoTrajanoRomanovCzarVespasianoDiocleciano?"
response = agent_executor.invoke({"input": question})
print(response)
```
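`AgentExecutor.invoke` returns a dict containing the original input and the agent's final answer under the `"output"` key. A sketch of reading the result (the dict below is a hypothetical example of the shape, not actual model output):

```python
# Hypothetical shape of the dict returned by agent_executor.invoke;
# the real "output" value depends on the model run.
response = {
    "input": 'What is the length of the word "hello"?',
    "output": "5",
}

final_answer = response["output"]
print(f"Final answer: {final_answer}")
```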
## Model Details

- Model size: 2.61B params
- Architecture: gemma2
- Format: GGUF, 8-bit quantization
