RonanMcGovern committed
Commit
7c71032
1 Parent(s): 8203e77

add llama cpp example

Files changed (1): README.md (+39 −0)
README.md CHANGED
@@ -71,6 +71,45 @@ Note that you'll still need to code the server-side handling of making the funct
 
  **Run on your laptop**
  Run on your laptop [video and jupyter notebook](https://youtu.be/nDJMHFsBU7M)

+ After running the llama.cpp server, you can call it with this Python script, with thanks to @jdo300:
+ ```python
+ import requests
+ import json
+
+ # Define the roles and markers
+ B_INST, E_INST = "[INST]", "[/INST]"
+ B_FUNC, E_FUNC = "<FUNCTIONS>", "</FUNCTIONS>\n\n"
+
+ # Define the function metadata
+ function_metadata = {
+     "function": "search_bing",
+     "description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
+     "arguments": [
+         {
+             "name": "query",
+             "type": "string",
+             "description": "The search query string"
+         }
+     ]
+ }
+
+ # Define the user prompt
+ user_prompt = 'Search for the latest news on AI.'
+
+ # Format the function list and prompt
+ function_list = json.dumps(function_metadata, indent=4)
+ prompt = f"{B_FUNC}{function_list.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST}\n\n"
+
+ # Define the API endpoint
+ url = "http://localhost:8080/completion"
+
+ # Send the POST request to the API server
+ response = requests.post(url, json={"prompt": prompt})
+
+ # Print the response
+ print(response.json())
+ ```
+
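As the note above says, you still need to code the server-side handling of the function call yourself. A minimal sketch of that step, assuming the model's completion text is a JSON object with `function` and `arguments` keys (these field names are illustrative assumptions, not guaranteed output):

```python
import json

# Hypothetical completion text from the model; the exact schema
# ("function" / "arguments" keys) is an assumption for illustration.
completion = '{"function": "search_bing", "arguments": {"query": "latest news on AI"}}'

# Parse the function call emitted by the model
call = json.loads(completion)

if call.get("function") == "search_bing":
    query = call["arguments"]["query"]
    # Dispatch to your own implementation, e.g. results = search_bing(query)
    print(f"Would call search_bing with query: {query}")
```

In practice you would validate the parsed call against the function metadata you sent in the prompt before dispatching it.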
  ## Syntax

  ### Prompt Templates