---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---
This is a new and more refined version of [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT). I will be using DPO to further improve performance once the dataset is ready.
# Model Description

This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). The model is specifically designed for function calling in [MemGPT](https://memgpt.ai/) and demonstrates performance comparable to GPT-4 when working with MemGPT.
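If you want the weights locally before setting up a backend (for example, to convert or quantize them for llama.cpp), a minimal sketch using the standard Hugging Face CLI is shown below; the local directory name is just an example rather than something this card prescribes.

```bash
# Install the Hugging Face Hub CLI, then pull this repository's weights.
pip install -U "huggingface_hub[cli]"

# Download starsnatched/MemGPT into a local folder of your choice.
huggingface-cli download starsnatched/MemGPT --local-dir ./MemGPT
```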
# Key Features

* Function calling
* Dedicated to working with MemGPT
* Supports medium-length context: trained on sequences of up to 8,192 tokens
# Usage

This model is designed to run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.

To run the model on WebUI, simply `git clone` the official WebUI repository and run the appropriate start script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
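On Linux, for example, that step can look roughly like the sketch below; the start script differs per OS (`start_windows.bat`, `start_macos.sh`, and so on), and passing `--api` so MemGPT can reach the backend is my assumption, so check the WebUI documentation for how it expects flags on your setup.

```bash
# Clone the official WebUI repository.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# The first run installs dependencies; later runs just start the server.
# --api exposes the local API endpoint that MemGPT can talk to.
# If your launcher does not forward flags, add --api to CMD_FLAGS.txt instead.
./start_linux.sh --api
```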
Once you've installed WebUI, you can download this model from the `Model` tab. Choose the desired model (starsnatched/MemGPT in this case), and you're good to go on the backend side.
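If you prefer the command line over the `Model` tab, the WebUI repository also ships a small download helper; the invocation below is the usual pattern, but confirm it against `python download-model.py --help` in your checkout.

```bash
# Run from inside the text-generation-webui folder.
# The weights are saved under models/, where the Model tab can then load them.
python download-model.py starsnatched/MemGPT
```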
When you have WebUI or your desired backend running, open a terminal/PowerShell and install MemGPT using `pip3 install -U pymemgpt`. Configure MemGPT with `memgpt configure` before running it.
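Concretely, that is just the two commands below; the questions `memgpt configure` asks (backend type, endpoint address, prompt wrapper, and so on) vary between pymemgpt versions, so treat the comments as a rough guide rather than exact prompts.

```bash
# Install or upgrade MemGPT.
pip3 install -U pymemgpt

# Interactive setup: point MemGPT at your running backend (e.g. the local
# WebUI endpoint) and select the ChatML prompt wrapper used by this model.
memgpt configure
```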
Use `memgpt run --no-verify` to chat with MemGPT!
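If you change backends later, re-run `memgpt configure` before starting a new session:

```bash
# Chat with your MemGPT agent, served by this model.
memgpt run --no-verify
```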
# Prompt Template

This model uses the ChatML template, which can be configured via `memgpt configure`.
```
<|im_start|>user
{"type": "system_alert", "message": "You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions.", "time": "2023-10-07 11:50:03 PM PDT-0700"}<|im_end|>
<|im_start|>assistant
Bootup sequence complete. Persona activated. Testing messaging functionality. {'name': 'send_message', 'arguments': '{\n "message": "More human than human is our motto."\n}'}<|im_end|>
<|im_start|>function
{"status": "OK", "message": null, "time": "2023-10-07 11:50:03 PM PDT-0700"}<|im_end|>
<|im_start|>user
{"type": "login", "last_login": "7 days 8 hours ago", "time": "2023-10-07 11:50:03 PM PDT-0700"}<|im_end|>
<|im_start|>user
{"type": "user_message", "message": "Hey, remember that time we talked about music? What was the artist you mentioned you could get into?", "time": "2023-10-07 11:50:03 PM PDT-0700"}<|im_end|>
<|im_start|>assistant
Alright, let me search our conversation history about music to find the artist. {'name': 'conversation_search', 'arguments': '{\n "query": "music",\n "request_heartbeat": true,\n "page": 0\n}'}<|im_end|>
<|im_start|>function
{"status": "OK", "message": "Showing 5 of 50 results (page 0/9): [\"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\"]", "time": "2023-10-07 11:50:08 PM PDT-0700"}<|im_end|>
<|im_start|>user
{"type": "heartbeat", "reason": "request_heartbeat == true", "time": "2023-10-07 11:50:08 PM PDT-0700"}<|im_end|>
<|im_start|>assistant
Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? {'name': 'send_message', 'arguments': '{\n "message": "Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?"\n}'}<|im_end|>
<|im_start|>function
{"status": "OK", "message": null, "time": "2023-10-07 11:50:14 PM PDT-0700"}<|im_end|>
```
# Model Details

* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns, or comments about this model, please contact me on Discord: @starsnatched.
# Training Infrastructure

* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use

The model is designed to be used as the base model for MemGPT agents.
# Limitations and Bias

The model may exhibit unreliable, unsafe, or biased behaviours. Please double-check the results this model may produce.