---
base_model: unsloth/gemma-2-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---

# Uploaded model

- **Developed by:** akshayballal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

### Usage

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# `model` and `tokenizer` are assumed to have been loaded beforehand,
# e.g. with FastLanguageModel.from_pretrained(...) pointing at this repository.

alpaca_prompt = """Below are the tools that you have access to these tools. Use them if required.

### Tools:
{}

### Query:
{}

### Response:
{}"""

tools = [
    {
        "name": "upcoming",
        "description": "Fetches upcoming CS:GO matches data from the specified API endpoint.",
        "parameters": {
            "content_type": {
                "description": "The content type for the request, default is 'application/json'.",
                "type": "str",
                "default": "application/json",
            },
            "page": {
                "description": "The page number to retrieve, default is 1.",
                "type": "int",
                "default": "1",
            },
            "limit": {
                "description": "The number of matches to retrieve per page, default is 10.",
                "type": "int",
                "default": "10",
            },
        },
    }
]

query = """Can you fetch the upcoming CS:GO matches for page 1 with a 'text/xml' content type and a limit of 20 matches? Also, can you fetch the upcoming matches for page 2 with the 'application/xml' content type and a limit of 15 matches?"""

# Switch the model into Unsloth's faster inference mode.
FastLanguageModel.for_inference(model)

# Build the prompt and move the tokenized inputs onto the model's device.
model_input = tokenizer(alpaca_prompt.format(tools, query, ""), return_tensors="pt").to(model.device)

# Stream tokens as they are generated (skipping the echoed prompt),
# then decode the full sequence. do_sample=False gives greedy decoding.
streamer = TextStreamer(tokenizer, skip_prompt=True)
output = model.generate(**model_input, max_new_tokens=1024, do_sample=False, streamer=streamer)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
```
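
The decoded output contains the echoed prompt followed by the model's answer. A minimal post-processing sketch, assuming the generation follows the prompt template above and that the model emits a JSON-style tool call after the `### Response:` marker (the `extract_response` helper is hypothetical, not part of this model's API):

```python
import json

def extract_response(decoded_output: str) -> str:
    """Hypothetical helper: return only the text generated after the last
    '### Response:' marker of the prompt template above."""
    marker = "### Response:"
    # The decoded output echoes the full prompt, so split on the marker
    # and keep everything after its last occurrence.
    return decoded_output.split(marker)[-1].strip()

response_text = extract_response(decoded_output)

# Assumption: the fine-tuned model emits a JSON-formatted tool call.
# Fall back to the raw text if it does not parse.
try:
    tool_calls = json.loads(response_text)
    print(tool_calls)
except json.JSONDecodeError:
    print(response_text)
```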