Introduction

The Flock Web3 Agent Model is designed to handle function-calling queries in the web3 domain.

Requirements

We recommend using the latest version of transformers.
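The library can be installed or upgraded from PyPI:

pip install -U transformers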

Quickstart

Given a query and a list of available tools, the model generates function calls that use the provided tools to answer the query.

Example query and tools format

input_example = {
  "query": "Track crosschain message verification, implement timeout recovery procedures.",
  "tools": [
    {"type": "function", "function": {"name": "track_crosschain_message", "description": "Track the status of a crosschain message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}}}}},
    {"type": "function", "function": {"name": "schedule_timeout_check", "description": "Schedule a timeout check for a message", "parameters": {"type": "object", "properties": {"message_id": {"type": "string"}, "timeout": {"type": "integer"}}}}}
  ]
}

Function calling generation

from transformers import AutoModelForCausalLM, AutoTokenizer
import json

# Load the model and tokenizer; device_map="auto" places the weights on the available device(s)
model_name = "flock-io/Flock_Web3_Agent_Model"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Pass the available tools to the model through the system prompt as a JSON string
messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required -"
                                   + json.dumps(input_example["tools"], ensure_ascii=False)},
    {"role": "user", "content": input_example["query"]}
]

# Build the prompt with the chat template and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the function calls
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=3000
)

# Keep only the newly generated tokens and decode them to text
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

The output is a string containing the generated function calls in JSON format:

[
    {"name": "track_crosschain_message", "arguments": {"message_id": "msg12345"}},
    {"name": "schedule_timeout_check", "arguments": {"message_id": "msg12345", "timeout": "30"}}
]
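Since the response is returned as plain text, it can be parsed back into Python objects before the calls are executed. Below is a minimal sketch, assuming the model returns a well-formed JSON list as shown above; the handler functions are hypothetical placeholders, not part of the model or library.

# Parse the generated string back into a list of call dictionaries
calls = json.loads(response)

# Hypothetical handlers keyed by function name; replace with real implementations
def track_crosschain_message(message_id):
    print(f"tracking message {message_id}")

def schedule_timeout_check(message_id, timeout):
    # Argument values may come back as strings (e.g. "30") and may need casting
    print(f"timeout check for {message_id} after {int(timeout)}s")

handlers = {
    "track_crosschain_message": track_crosschain_message,
    "schedule_timeout_check": schedule_timeout_check,
}

for call in calls:
    handlers[call["name"]](**call["arguments"])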

Model details

Model size: 7.62B params (Safetensors)
Tensor type: BF16
Base model: Qwen/Qwen2.5-7B (this model is a fine-tune of the base)