[FEATURE] Community Tools

#569
by nsarrazin HF staff - opened
Hugging Chat org
edited Sep 20


Community Tools on HuggingChat!

Today we're releasing Community Tools on HuggingChat. This feature lets you create custom tools using Hugging Face Spaces! We're also making it possible for tools to use new modalities, like video, speech, and more! You can find community tools on the following page: https://huggingface.co/chat/tools

In order to use tools in your conversations you will need to use a model compatible with tool calling. Currently we support the following models:

  • meta-llama/Meta-Llama-3.1-70B-Instruct
  • CohereForAI/c4ai-command-r-plus-08-2024

It's now also possible to add Community Tools to your assistants! When you create an assistant with a compatible model, you will see a search bar for tools where you can add your own tools. This lets you group tools together with a system prompt to create more complete experiences using assistants.

We've also created a blog post to explain how to create tools, either turning existing Spaces into tools or creating your own custom Spaces. You can find it here 📝!

The feature is still new and we're excited to hear your feedback. Feel free to share your tools in this thread, we will review them and possibly feature them on the community page!
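For anyone curious what backs a tool: a tool is a Space, and a Space essentially exposes a function over an API. Here is a minimal sketch (a hypothetical dice-roll example; all names are illustrative, and the Gradio interface wiring that would actually expose it is omitted so the snippet stays self-contained):

```python
import random

def roll_dice(sides: int = 6, rolls: int = 1) -> str:
    """Core function a tool Space would expose.

    In a real Space this would be wrapped in a Gradio interface,
    e.g. gr.Interface(fn=roll_dice, ...), so HuggingChat can call it
    as a named API endpoint.
    """
    results = [random.randint(1, int(sides)) for _ in range(int(rolls))]
    return ", ".join(str(r) for r in results)

print(roll_dice(6, 3))
```

The blog post linked above covers the actual steps for turning a Space like this into a tool.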

nsarrazin pinned discussion

Thank you, Hugging Face and the whole HuggingChat team, for this amazing feature.
Here are some tools I created:

  1. Chat with Image and Video (uses Qwen2-VL 7B)
    https://hf.co/chat/tools/66e85bb396d054c5771bc6cb

    (video is not working in Llama 3.1 70B, I don't know why)

  2. Critical Thinker
    https://hf.co/chat/tools/66e864a034d40bac65628668

  3. Flux Dev Fast
    https://hf.co/chat/tools/66e9b279bbf94ad91c808f68

Dunno how to use this.
For example, I made the dice roll tool active; what do I do to call it in the chat?

Dunno how to use this.
For example, I made the dice roll tool active; what do I do to call it in the chat?

You can ask the model to use the specific ACTIVE TOOL and it will use it. Alternatively, during regular conversations, it will use active tools automatically as needed.

What's the reason that the tool selector states there are 83 available but when you browse them there are only 28 to choose from?

Hugging Chat org

@ChrisUnimportant the counter was showing the total number of tools, but we only show featured ones. Will fix 😅

Better Doc reader: https://hf.co/chat/tools/66ed8236a35891a61e2bfcf2

This tool can read all types of files and content:
from code files to Excel, PDF to PPT, DOCX to CSV, and many more...
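The tool's implementation isn't public, but a "read any file" tool typically dispatches on file extension. A minimal sketch of that pattern (text formats only here; real parsers for .pdf, .docx, .xlsx, etc. would need dedicated libraries):

```python
import pathlib

# Map file extensions to reader functions. Only plain-text formats are
# handled in this sketch; binary formats would get dedicated parsers.
READERS = {
    ".txt": lambda p: p.read_text(encoding="utf-8"),
    ".csv": lambda p: p.read_text(encoding="utf-8"),
    ".py": lambda p: p.read_text(encoding="utf-8"),
}

def read_any(path: str) -> str:
    """Read a file by dispatching on its extension."""
    p = pathlib.Path(path)
    reader = READERS.get(p.suffix.lower())
    if reader is None:
        raise ValueError(f"unsupported file type: {p.suffix}")
    return reader(p)
```

Adding a new format is then just a matter of registering another entry in the dispatch table.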

Hugging Chat org

@KingNish Nice, just tried it and it's super fast! Featured it as well :)

Looks like your link is broken: it should be https://huggingface.co/chat/tools instead of https://huggingface.co/chat/tool.

Hugging Chat org

Oof good catch @EveryPizza , fixed!

I've activated a tool, but can't figure out how to use it. The chat interface doesn't seem to know how to use it; it just gives a normal response as if I hadn't asked it to use the tool. I tried the image generation (Flux) one and the Voice Cloner tool (its specs mention /predict and a create_voice function name), but putting those into the chat doesn't work.

I've activated a tool, but can't figure out how to use it. The chat interface doesn't seem to know how to use it; it just gives a normal response as if I hadn't asked it to use the tool. I tried the image generation (Flux) one and the Voice Cloner tool (its specs mention /predict and a create_voice function name), but putting those into the chat doesn't work.

Hi fellow Scott!

Coincidentally, I asked this same question; here's the answer:

Dunno how to use this.
For example, I made the dice roll tool active; what do I do to call it in the chat?

You can ask the model to use the specific ACTIVE TOOL and it will use it. Alternatively, during regular conversations, it will use active tools automatically as needed.

https://hf.co/chat/tools/66f1a8159d41ad4398ebb711
This tool helps you extract any content (like images, videos, PDFs, docs, etc.).

Example Chat:
https://hf.co/chat/r/0zCEqI_

https://hf.co/chat/tools/66f1a8159d41ad4398ebb711
This tool helps you extract any content (like images, videos, PDFs, docs, etc.).

Another update: it can now also extract normal webpages, and I've resolved an issue with link fetching and image fetching in the default Fetch URL.
@nsarrazin can you please feature it as well?

Does it support multiple functions? My Space has 2 functions and I want to add all of them to my tool.

Hugging Chat org

@eienmojiki you will have to create two tools for this, but it should not be an issue otherwise! You can combine them in an assistant afterwards if you want.

I've tried configuring a tool from one of our Spaces, but when I "save" it, it just briefly shows "Saving..." and stays on the same configuration page. The tool doesn't appear in the list of available tools.

Hugging Chat org

Could you share the Space in question? @jgrivolla

Could you share the Space in question? @jgrivolla

I tried with this space: BSC-LT/EADOP_RAG_EXPERIMENTAL

It's a simple RAG system, and the idea was to use just the retrieval portion (the plan is to later slim it down to just the required functionality for actual use, but for testing I used the existing Space). The input is input_, the output is markdown-2 (the retrieval result / context), and all parameters are fixed to their default values. Other than those configurations, I just added the tool description and the description of the input_ argument.

Document parser fails at times.

Document parser fails at times.

https://hf.co/chat/tools/66ed8236a35891a61e2bfcf2

Try this.

Roughly when will agents be a thing?

I can't save the tool.

Hugging Chat org

Could you share which Space you tried to use? @Taf2023

Image editing says "error: no tool found", and for image generation the chat says it "can't generate an image" (it doesn't try to load the tool at all), so neither of these tools works.

Please add tools support at least to assistants that use a model supporting tools.
If you like this idea, 💖 this comment.

Hugging Chat org

That should already be supported! Try creating an assistant with Command R+ or Llama 3.1 70B and you should be able to select tools to add to your assistant. @KarthiDreamr

That should already be supported! Try creating an assistant with Command R+ or Llama 3.1 70B and you should be able to select tools to add to your assistant. @KarthiDreamr

I don't see any option to select tools.

Hugging Chat org

Whoops guys, feel free to try again, the feature should be there now 😅

Whoops guys, feel free to try again, the feature should be there now 😅

very good

Taf2023/HTML-Generator
SyntaxError: Unexpected end of JSON input

Whoops guys, feel free to try again, the feature should be there now 😅

@nsarrazin there is no option to upload files or images in the assistant, even when it's using a tool which supports file upload.


Hope this gets added to the recent Nvidia model, this feature seems pretty cool.

On that topic, it should be noted on the frontend that this is currently limited to those two models. I was briefly confused (as I imagine other people would be) as to why the model wasn't interacting with or acknowledging any of the tools I activated, until I found this discussion page where that limitation is mentioned.

I've created a Space using the default free CPU basic hardware set to 'public', yet when trying to create a tool using it, there's an error saying:
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data

Is there any specific reason why my Space can't be used as a tool?

Hugging Chat org

Could you share the tool? I could take a look @testnow720

@nsarrazin there is no option to upload files or images in the assistant, even when it's using a tool which supports file upload.

Hugging Chat org

Could you share an assistant that has this issue @KingNish ?

Could you share an assistant that has this issue @KingNish ?

https://hf.co/chat/assistant/6710d67d57b98c857f7259db

Hugging Chat org

Thanks, I can reproduce the issue, let me take a look.

Could you share the tool? I could take a look @testnow720

The Space is up at : testnow720/hfchat_code_execution

(I couldn't create a tool that uses the Space because of said error)

Hugging Chat org

@KingNish the issue should be fixed!

That should already be supported! Try creating an assistant with Command R+ or Llama 3.1 70B and you should be able to select tools to add to your assistant. @KarthiDreamr

You developers are 🔮 real wizards 🪄

Hi everyone! Adding tools to assistants is the best thing that ever happened to HuggingChat. Buuut, you can only add up to 3 tools. I wanted to create a Swiss Army knife assistant... It would be really good if the limit were 5.

Posting a solution for anyone encountering the following error while using a Space to create a tool:
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data

I resolved the issue by changing the sdk_version key for Gradio to 4.41.0 in the README.md file (I am not sure whether every 4.x.x release will work).
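For context on what this error likely means: the chat UI presumably fetches the Space's API config and parses it as JSON, so an empty or non-JSON response (for example from a Gradio version that doesn't expose the expected endpoint) produces exactly this failure. A Python equivalent of the JavaScript error:

```python
import json

# JavaScript's JSON.parse("") and Python's json.loads("") fail the same
# way: an empty HTTP response body cannot be parsed as JSON.
try:
    json.loads("")
except json.JSONDecodeError as err:
    print(f"parse failed: {err}")
```

That is consistent with the fix above: pinning sdk_version changes what the Space serves, so the response parses again.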


Could someone make this super useful tool: GENERATE DIAGRAMS (for example with Mermaid) and display them directly in HuggingChat?

It would be so great (only Claude has this feature at the moment, and it doesn't work very well).

Example: (screenshot attached)
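The text half of such a tool is easy: Mermaid diagrams are plain text, so the tool function only needs to emit valid Mermaid source (rendering it to an image is the part that would need extra machinery in the Space). A hypothetical sketch:

```python
def mermaid_flowchart(edges: list[tuple[str, str]]) -> str:
    """Emit Mermaid flowchart source for a list of (from, to) edges."""
    lines = ["graph TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

print(mermaid_flowchart([("Start", "Parse"), ("Parse", "Render")]))
```

The chat model could even produce this source itself; the tool's real value would be rendering it inline.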

So I am using a duplicated RAG tool template Space, as I have done before, with the goal of making an up-to-date Gradio docs community tool. The existing one (which uses Nymbo/RAG-Tool-HuggingChat) is broken, giving "Runtime error Exit code: 128. Reason: failed to create containerd task: failed to create shim task: context deadline exceeded: unknown", and prior to breaking it was not being updated with any Gradio 5.0+ docs, so it is no longer usable [1].

For my Gradio docs RAG Space, when I use the free CPU tier I am getting:

Runtime error
Launch timed out, workload was not healthy after 30 min

If I use the ZeroGPU hardware tier, the Space works fine, and I am okay with that as the resolution, but on the free CPU tier it errors out during startup. I assume the total amount of content in the sources directory is larger than the time allows to embed entirely.
So my question is: what are the best ways to improve the embedding process? I am generally curious to read up on better-optimized processing approaches for the Space, or even better approaches entirely, in which case I would just rewrite the Space from scratch.
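On the embedding question: one common fix for embed-at-startup timeouts is to compute embeddings offline, commit the cache to the repo, and have the Space only embed documents that are missing from the cache at launch. A minimal sketch with a stand-in (hypothetical) embedding function:

```python
import hashlib
import json
import pathlib

CACHE_FILE = pathlib.Path("embeddings_cache.json")

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model call (hypothetical).
    return [float(len(text)), float(text.count(" "))]

def embed_corpus(docs: list[str]) -> dict:
    """Embed only documents not already in the on-disk cache."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    for doc in docs:
        key = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if key not in cache:
            cache[key] = embed(doc)
    CACHE_FILE.write_text(json.dumps(cache))
    return cache
```

With the cache committed alongside the sources, a restart on CPU hardware only pays for documents added since the cache was built.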

My gradio docs RAG space is https://huggingface.co/spaces/Csplk/gradio_docs_rag_hfchattool

I initially used dev mode, as it is just easier to interact with a Space that way to generate the files and put them into the sources directory, as documented in https://huggingface.co/spaces/Csplk/gradio_docs_rag_hfchattool/blob/main/how2_generate_gradio_docs.txt. Once I had generated the files, put them into sources, and pushed the changes, I switched back to non-dev mode. I can't see why that would cause issues, since the Space works fine on ZeroGPU as mentioned, but I thought I would note it in case it helps clarify why the issue occurs on CPU hardware.

[1] I made an issue a while ago asking for the docs used by the existing tool's Space to be updated, but it has not been looked at after a month or so. @Nymbo, if you see this post and want to update your sources files to the latest Gradio version docs, go for it; your tool already has usage, so I don't want to confuse people with two Gradio docs tools if it's not needed. Alternatively, let me know how you generated the files for yours; I am currently using a different method (JSON files via the generate_jsons script in the Gradio repo, instead of Markdown files), and I will switch to your approach going forward.

When I try to use the working ZeroGPU-based Space to add a new tool, I am getting the following error: "Syntax error: the string did not match the expected pattern". Any thoughts?
(screenshot attached)

stabilityai/stable-diffusion-3.5-large-turbo
SyntaxError: Unexpected end of JSON input

Hi guys. I keep getting errors when calling tools from either of the instruct models (Cohere or Meta). I'm just running this in a browser on a normal laptop, part of the beauty of HuggingChat, but I've started getting "No API found"-type errors whenever the model needs to call a tool. Is this a problem on my side, or something broken or missing on your side? Thanks for reading.

I tried using Better Document Reader, Document, and Document Parser to chat with an uploaded document, using the Cohere or Llama models built into HuggingChat, but it always fails and shows "no document found". Please rectify this error.

Help to fix this, please:
Choose functions that can be called in your tool.

SyntaxError: Failed to execute 'json' on 'Response': Unexpected end of JSON input


https://hf.co/chat/tools/66eb3ef40d03fd270ba657f3

The tool is not detected or invoked properly, and the output is also affected.

(screenshots attached)

You should update the docs of HuggingChat's chat-ui on GitHub, because it is extremely confusing how to add tools support to a self-hosted chat-ui.

I would like to call in sick.

I am developing a Telegram bot that uses the Hugging Face API to provide responses for an interactive game. I need to know whether the API has access to the "Tools Beta" feature, as this is critical to the functionality of our game. Alternatively, please tell me what open-source code is available so that this can be implemented directly on a computer.


I can't play the audio file, I'm using a screen reader.

When I have the web search tool enabled, only 1 search request is performed per message. This is problematic when a request requires multiple searches to complete. Is it possible to add functionality that allows for multiple searches in a single response?

https://hf.co/chat/r/o0Y2ctc?leafId=6593e644-3924-41a8-82cc-9f72d969f45a

Here is an example. The first message requests a web search that provides a list of birds. The first branch of the second message requests searches for details about each species of bird. The current chat system only performs one search, which corresponds to the first bird in the list. The remaining descriptions are not from the search results and contain information that was not in the search, along with hallucinated references.

The second branch of the second message makes the same request, but instructs the model to only handle 1 request at a time. The results are as expected but manual intervention is required for each item in the list.

As a user, I would like to be able to submit the first prompt of the second message and have the model perform a web search for each item in the list and output the results, all in a single message and without user intervention for each item.
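What's being asked for, expressed as code, is essentially a loop over the list items with one tool call each (search() below is a hypothetical stand-in for the web-search tool; chat-ui's actual internals may work differently):

```python
def search(query: str) -> str:
    # Hypothetical stand-in for the chat's web-search tool.
    return f"results for: {query}"

def search_each(items: list[str]) -> dict[str, str]:
    """Desired behaviour: one search per item, all within a single response."""
    return {item: search(f"{item} species details") for item in items}

for bird, result in search_each(["robin", "sparrow", "blue jay"]).items():
    print(f"{bird}: {result}")
```

Today the model effectively stops after the first search() call; the request is for it to complete the whole loop before responding.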
