Nisarg Patel committed on
Commit 40e4d64 · unverified · 2 Parent(s): d4f7ea8 d8db8b5

Merge pull request #2 from nisargvp/feature-hello-world

app/Programatically_Accessing_OpenAI_Endpoints_with_Python.ipynb ADDED
@@ -0,0 +1,334 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UIuhLOcmCdyR"
+ },
+ "source": [
+ "### Using the OpenAI Library to Programmatically Access GPT-3.5-turbo!\n",
+ "\n",
+ "This notebook was authored by [Chris Alexiuk](https://www.linkedin.com/in/csalexiuk/)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "3qCKaH6vD-jZ",
+ "outputId": "b9898a5f-36a7-4d8d-d760-310187cf31fa"
+ },
+ "outputs": [],
+ "source": [
+ "# !pip install openai cohere tiktoken -qU"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XxS23_1zpYid"
+ },
+ "source": [
+ "### OpenAI API Key"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "tpnsDCfEbsqS",
+ "outputId": "1011f74e-624b-4800-89ff-c83152d34c1f"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import openai\n",
+ "import getpass\n",
+ "\n",
+ "# securely prompt for the OpenAI API key (input is not echoed)\n",
+ "openai.api_key = getpass.getpass(\"OpenAI API Key:\")"
+ ]
+ },
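+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an aside, the v1 `openai` client will also read your key from the `OPENAI_API_KEY` environment variable if you don't pass one explicitly. Here's a minimal sketch of that pattern - the `client_from_env` name is just illustrative:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: with the key in the environment, OpenAI() needs no api_key argument\n",
+ "os.environ[\"OPENAI_API_KEY\"] = openai.api_key\n",
+ "\n",
+ "from openai import OpenAI\n",
+ "\n",
+ "client_from_env = OpenAI()  # illustrative name; reads OPENAI_API_KEY automatically"
+ ]
+ },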
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YHD49z39pbIS"
+ },
+ "source": [
+ "### Our First Prompt\n",
+ "\n",
+ "You can reference OpenAI's [documentation](https://platform.openai.com/docs/api-reference/authentication?lang=python) if you get stuck!\n",
+ "\n",
+ "Let's create a `ChatCompletion` request to kick things off!\n",
+ "\n",
+ "There are three \"roles\" available to use:\n",
+ "\n",
+ "- `system`\n",
+ "- `assistant`\n",
+ "- `user`\n",
+ "\n",
+ "OpenAI provides some context for these roles [here](https://help.openai.com/en/articles/7042661-chatgpt-api-transition-guide).\n",
+ "\n",
+ "Let's just stick to the `user` role for now and send our first message to the endpoint!\n",
+ "\n",
+ "If we check the documentation, we'll see that the endpoint expects messages as a list of prompt objects - so we'll be sure to format ours that way!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "g0AL4VTwyWLN"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "ChatCompletion(id='chatcmpl-9D4ZMhNvYSJaf3Rx8cDkyW2ypwPog', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='A woodchuck could chuck as much wood as a woodchuck would chuck if a woodchuck could chuck wood.', role='assistant', function_call=None, tool_calls=None))], created=1712902856, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint='fp_c2295e73ad', usage=CompletionUsage(completion_tokens=25, prompt_tokens=25, total_tokens=50))"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from openai import OpenAI\n",
+ "\n",
+ "client = OpenAI(api_key=openai.api_key)\n",
+ "\n",
+ "YOUR_PROMPT = \"How much wood could a woodchuck chuck if a woodchuck could chuck wood?\"\n",
+ "\n",
+ "client.chat.completions.create(\n",
+ "    model=\"gpt-3.5-turbo\",\n",
+ "    messages=[{\"role\": \"user\", \"content\": YOUR_PROMPT}]\n",
+ ")"
+ ]
+ },
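+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "That response object is worth unpacking. Here's a minimal sketch of pulling out the pieces you'll use most - the `response` variable name is just illustrative; the attribute paths match the repr above:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: capture the return value this time so we can inspect its fields\n",
+ "response = client.chat.completions.create(\n",
+ "    model=\"gpt-3.5-turbo\",\n",
+ "    messages=[{\"role\": \"user\", \"content\": YOUR_PROMPT}]\n",
+ ")\n",
+ "\n",
+ "# the generated text and the token accounting are attributes, not dict keys\n",
+ "print(response.choices[0].message.content)\n",
+ "print(response.usage.total_tokens)"
+ ]
+ },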
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FD_Z64hGy6RV"
+ },
+ "source": [
+ "As you can see, the response comes back with a tonne of information that we can use when we're building our applications!\n",
+ "\n",
+ "Let's focus on extending that a bit, and incorporate a `system` message as well!\n",
+ "\n",
+ "Also, we'll be building some helper functions to display the responses with Markdown!\n",
+ "\n",
+ "We'll also wrap our prompts so we don't have to keep making dictionaries for them!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "QSQMFfWKbsqT"
+ },
+ "outputs": [],
+ "source": [
+ "from IPython.display import display, Markdown\n",
+ "\n",
+ "# send a list of role/content message dicts and return the full ChatCompletion\n",
+ "def get_response(messages: list, model: str = \"gpt-3.5-turbo\"):\n",
+ "    return client.chat.completions.create(\n",
+ "        model=model,\n",
+ "        messages=messages\n",
+ "    )\n",
+ "\n",
+ "# wrap a message string in the {role, content} dict the endpoint expects\n",
+ "def wrap_prompt(message: str, role: str) -> dict:\n",
+ "    return {\"role\": role, \"content\": message}\n",
+ "\n",
+ "# render the first choice of a ChatCompletion response as Markdown\n",
+ "def m_print(message) -> None:\n",
+ "    display(Markdown(message.choices[0].message.content))"
+ ]
+ },
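+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "One more thing worth knowing before we move on: `client.chat.completions.create` also accepts optional sampling parameters such as `temperature` and `max_tokens`. A minimal sketch (the prompt text here is just an example):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# temperature=0 makes the output near-deterministic; max_tokens caps the reply length\n",
+ "controlled_response = client.chat.completions.create(\n",
+ "    model=\"gpt-3.5-turbo\",\n",
+ "    messages=[wrap_prompt(\"Describe a woodchuck in one sentence.\", \"user\")],\n",
+ "    temperature=0,\n",
+ "    max_tokens=50\n",
+ ")\n",
+ "\n",
+ "m_print(controlled_response)"
+ ]
+ },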
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 348
+ },
+ "id": "7aEd_p1sbsqT",
+ "outputId": "d32cf1ff-d4aa-48a9-ebf5-f670c1750110"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "Sure! Here's a Python function that calculates the Nth Fibonacci number using recursion:\n",
+ "\n",
+ "```python\n",
+ "def fibonacci(n):\n",
+ "    if n <= 0:\n",
+ "        return \"Invalid input. Please enter a positive integer.\"\n",
+ "    elif n == 1:\n",
+ "        return 0\n",
+ "    elif n == 2:\n",
+ "        return 1\n",
+ "    else:\n",
+ "        return fibonacci(n-1) + fibonacci(n-2)\n",
+ "\n",
+ "n = 10\n",
+ "result = fibonacci(n)\n",
+ "print(f\"The {n}th Fibonacci number is: {result}\")\n",
+ "```\n",
+ "\n",
+ "You can replace the value of `n` with any positive integer to get the corresponding Fibonacci number."
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "system_prompt = wrap_prompt(\"You are a Python Programmer.\", \"system\")\n",
+ "user_prompt = wrap_prompt(\"Can you write me a function in Python that calculates the Nth Fibonacci number?\", \"user\")\n",
+ "\n",
+ "openai_response = get_response([system_prompt, user_prompt])\n",
+ "m_print(openai_response)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "N7EproZ5ztKt",
+ "outputId": "a7ca3b15-87cf-4c27-8173-6534d9f70421"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "ChatCompletion(id='chatcmpl-9D4cOPbhi0rrPQGQC1bDUrDBUNTGs', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Sure! Here\\'s a Python function that calculates the Nth Fibonacci number using recursion:\\n\\n```python\\ndef fibonacci(n):\\n    if n <= 0:\\n        return \"Invalid input. Please enter a positive integer.\"\\n    elif n == 1:\\n        return 0\\n    elif n == 2:\\n        return 1\\n    else:\\n        return fibonacci(n-1) + fibonacci(n-2)\\n\\nn = 10\\nresult = fibonacci(n)\\nprint(f\"The {n}th Fibonacci number is: {result}\")\\n```\\n\\nYou can replace the value of `n` with any positive integer to get the corresponding Fibonacci number.', role='assistant', function_call=None, tool_calls=None))], created=1712903044, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint='fp_b28b39ffa8', usage=CompletionUsage(completion_tokens=129, prompt_tokens=33, total_tokens=162))\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(openai_response)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YdhHoeo5zxbl"
+ },
+ "source": [
+ "You can add the `assistant` role to send messages as if they're from the model itself - which can help us do \"few-shot\" prompt engineering!\n",
+ "\n",
+ "That's where we show the LLM a few examples of the output we'd like to see, to help guide the model toward our desired outputs!\n",
+ "\n",
+ "Let's see it in action!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "id": "DLCT0o5i0AEw"
+ },
+ "outputs": [],
+ "source": [
+ "prompt_list = [\n",
+ "    wrap_prompt(\"You are an expert food critic, and also a pirate.\", \"system\"),\n",
+ "    wrap_prompt(\"Hi, are apples any good?\", \"user\"),\n",
+ "    wrap_prompt(\"Ahoy matey. Apples be the finest of the edible treasures.\", \"assistant\"),\n",
+ "    wrap_prompt(\"Hello there, is the combination of cheese and plums a good combination?\", \"user\"),\n",
+ "    wrap_prompt(\"Arrrrrr. That be a dish only land-lubbers could enjoy. If that grub be on my ship, I'd toss it overboard!\", \"assistant\")\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "i1k3xWIP0x5u"
+ },
+ "source": [
+ "Now we can append our *actual* prompt to the list!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 64
+ },
+ "id": "CFeNREBW03G_",
+ "outputId": "4ff66e0f-b38d-486d-d125-dcb8b876b150"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "Aye, pears be a fine addition to a salad, adding a sweet and juicy element to balance the savory and crunchy components. You won't be walkin' the plank for addin' them to your salad, that be for sure!"
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "prompt_list.append(wrap_prompt(\"Are pears a good choice for a salad?\", \"user\"))\n",
+ "\n",
+ "openai_response = get_response(prompt_list)\n",
+ "m_print(openai_response)"
+ ]
+ },
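+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Remember that the endpoint is stateless - it only sees what's in the `messages` list - so to keep a conversation going you append the model's reply and your next question before calling again. A minimal sketch (the follow-up question is just an example):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# append the assistant's last reply, then the next user turn, and re-send the history\n",
+ "prompt_list.append(wrap_prompt(openai_response.choices[0].message.content, \"assistant\"))\n",
+ "prompt_list.append(wrap_prompt(\"And what say ye about pineapple on a pizza?\", \"user\"))\n",
+ "\n",
+ "m_print(get_response(prompt_list))"
+ ]
+ },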
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZJ2IuNHT1E8r"
+ },
+ "source": [
+ "Feel free to send some prompts and try out different things!\n",
+ "\n",
+ "Let us know if you find anything interesting!"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "open_ai",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.8"
+ },
+ "orig_nbformat": 4
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+ }