[FEATURE] Community Tools

#569
by nsarrazin HF staff - opened
Hugging Chat org
edited Sep 20


Community Tools on HuggingChat!

Today we're releasing Community Tools on HuggingChat. This feature lets you create custom tools using Hugging Face Spaces, and it also makes it possible for tools to use new modalities, like video, speech, and more! You can find community tools on the following page: https://huggingface.co/chat/tools

In order to use tools in your conversations you will need to use a model compatible with tool calling. Currently we support the following models:

  • meta-llama/Meta-Llama-3.1-70B-Instruct
  • CohereForAI/c4ai-command-r-plus-08-2024

It's now also possible to add Community Tools to your assistants! When you create an assistant with a compatible model, you will see a search bar for tools where you can add your own tools. This lets you group tools together with a system prompt to create more complete experiences using assistants.

We've also created a blog post to explain how to create tools, either turning existing Spaces into tools or creating your own custom Spaces. You can find it here 📝!

The feature is still new and we're excited to hear your feedback. Feel free to share your tools in this thread, we will review them and possibly feature them on the community page!

nsarrazin pinned discussion

Thank you, Hugging Face and the whole HuggingChat team, for this amazing feature.
Here are some tools I created:

  1. Chat with Image and Video (uses Qwen2-VL-7B)
    https://hf.co/chat/tools/66e85bb396d054c5771bc6cb

    (video is not working with Llama 3.1 70B, I don't know why)

  2. Critical Thinker
    https://hf.co/chat/tools/66e864a034d40bac65628668

  3. Flux Dev Fast
    https://hf.co/chat/tools/66e9b279bbf94ad91c808f68

I don't know how to use this.
For example, I've made the dice roll tool active. What do I do to call it in the chat?

You can tell the model to use a specific active tool and it will use it. Alternatively, during regular conversations, it will use active tools automatically as needed.

What's the reason that the tool selector states there are 83 available but when you browse them there are only 28 to choose from?

Hugging Chat org

@ChrisUnimportant The counter was showing the total number of tools, but we only list the featured ones. Will fix 😅

Better Doc reader: https://hf.co/chat/tools/66ed8236a35891a61e2bfcf2

This tool can read all kinds of files and content:
from code files to Excel, PDF to PPT, DOCX to CSV, and many more...

Hugging Chat org

@KingNish Nice, just tried it and it's super fast! Featured it as well :)

https://hf.co/chat/tools/66eb3ef40d03fd270ba657f3

My tool is not being detected or invoked properly, and the output is broken as well.


You should update the chat-ui docs on GitHub, because it is extremely confusing how to add tool support to a self-hosted chat-ui.

I would like to call in sick.

I am developing a Telegram bot that uses the Hugging Face API to provide responses for an interactive game. I need to know whether the API has access to the Tools beta feature, as this is critical to our game's functionality. Alternatively, could you point me to the relevant open-source code so that this can be implemented directly on my own machine?
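For context, the Inference API does expose OpenAI-style tool calling through the `huggingface_hub` client's `chat_completion` method; whether HuggingChat's Community Tools themselves are reachable that way is exactly the open question here. A minimal sketch of the payload shape, with an entirely hypothetical `dice_roll` tool schema:

```python
# Sketch: the shape of a tool-calling request against the Inference API.
# The dice_roll schema below is hypothetical, not a real Community Tool.

# Tools are described with OpenAI-style JSON schemas:
DICE_ROLL_TOOL = {
    "type": "function",
    "function": {
        "name": "dice_roll",  # hypothetical tool name
        "description": "Roll an n-sided die and return the result.",
        "parameters": {
            "type": "object",
            "properties": {
                "sides": {"type": "integer", "description": "Number of sides."}
            },
            "required": ["sides"],
        },
    },
}

def build_chat_request(prompt: str) -> dict:
    """Assemble a chat-completion payload with the tool attached."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "tools": [DICE_ROLL_TOOL],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

# With huggingface_hub installed, the call would look like:
#   from huggingface_hub import InferenceClient
#   client = InferenceClient("meta-llama/Meta-Llama-3.1-70B-Instruct")
#   response = client.chat_completion(**build_chat_request("Roll a d20."))
```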


I can't play the audio file, I'm using a screen reader.

When I have the web search tool enabled, only 1 search request is performed per message. This is problematic when a request requires multiple searches to complete. Is it possible to add functionality that allows for multiple searches in a single response?

https://hf.co/chat/r/o0Y2ctc?leafId=6593e644-3924-41a8-82cc-9f72d969f45a

Here is an example. The first message requests a web search that provides a list of birds. The first branch of the second message requests searches for details about each species of bird. The current chat system performs only one search, which corresponds to the first bird in the list. The remaining descriptions are not drawn from search results; they contain information that was never in the search, along with hallucinated references.

The second branch of the second message makes the same request, but instructs the model to only handle 1 request at a time. The results are as expected but manual intervention is required for each item in the list.

As a user, I would like to be able to submit the first prompt of the second message and have the model output perform a web search for each item in the list and output the result, all in a single message and without user intervention for each item.
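What's being asked for here is essentially an agent-style tool loop: keep feeding each search result back to the model until it answers without requesting another search. A rough sketch of the pattern, where `call_model` and `search_web` are hypothetical stand-ins rather than HuggingChat internals:

```python
# Sketch of the multi-search loop described above: drive the model,
# executing each requested web search, until it produces a plain answer
# (or a call budget runs out). call_model/search_web are hypothetical.

def run_until_done(messages, call_model, search_web, max_calls=10):
    """Loop: model asks for searches, we execute them and feed results back."""
    for _ in range(max_calls):
        reply = call_model(messages)
        if not reply.get("tool_call"):        # plain answer: we're done
            return reply["content"]
        query = reply["tool_call"]["query"]   # model requested another search
        messages = messages + [
            {"role": "assistant", "tool_call": reply["tool_call"]},
            {"role": "tool", "content": search_web(query)},
        ]
    return "Stopped after reaching the search budget."
```

With a loop like this, the bird-list example would take a single user message: the model could request one search per species and only answer once every lookup had been fed back in.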
