
Using Azure OpenAI's LLM intelligence to refine your code and improve your efficiency.
0:00
So, coming to Lab 7, this lab is about using your GPT engine to help you with either debugging
0:12
your code or providing unit tests for the function that you have written
0:17
So, in my Lab 7 folder, when you obviously clone the GitHub repository, you can see I
0:23
have three files in my Lab 7 folder. One is called execute.py
0:28
The second one is called factorial.py and the third one is called function.py
0:34
So, right over here in my factorial.py file, I have written a small, very small piece of
0:43
code to print the factorial of a number, and I have
0:52
deliberately made an error with respect to the indentation of the function, and obviously,
0:59
because we want our GPT engine to resolve this error, I have made this particular indentation error
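To make that concrete, here is a minimal sketch of what such a factorial.py could look like, assuming the deliberate error is a mis-indented loop body and that the number is read with input() (which matters later in the video); the lab's actual file may differ:

    # factorial.py (sketch) -- deliberately broken so the GPT engine has something to fix
    def factorial(n):
        result = 1
        while n > 1:
        result = result * n   # deliberate indentation error: the loop body is not indented
        n = n - 1
        return result

    num = input("Enter a number: ")  # note: input() returns a string
    print(factorial(num))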
1:07
So, when I pass my code into my GPT engine, I want my GPT engine to remove this indentation
1:16
error and provide me the reason as well. Navigating over to the function.py file, I have written an absolute-square-difference
1:26
function, which takes in two parameters, namely num1 and num2, two
1:36
formal parameters, and then returns the square of the absolute difference between
1:42
both of these numbers. Now, using this function, I will pass it into my GPT engine prompt and then
1:50
will tell my GPT engine to provide unit tests for this function, to see whether or not this
1:58
function is performing the way I want it to perform. Okay, so coming to the execute.py file, this is the main code that we will be using to
2:12
call our GPT engine and then fetch the corresponding responses from it
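Before walking through execute.py, here is roughly what the function.py just described might contain (a sketch: the function name is an assumption, while the parameters num1 and num2 come from the walkthrough):

    # function.py (sketch) -- the function the unit tests will target
    def abs_square_difference(num1, num2):
        # return the square of the absolute difference between the two numbers
        return abs(num1 - num2) ** 2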
2:20
So, in the first part of my code, I have imported some important libraries and utilities, namely
2:26
OpenAI, os, json, requests, and AzureOpenAI from the OpenAI package. This part of the code is for setting the important configurations and credentials that we will
2:39
be using to, first of all, create an Azure OpenAI client and then call our Azure OpenAI
2:45
GPT engine or model. This is a function wherein we take in three parameters, namely OpenAI key, OpenAI endpoint
2:56
URL, and the user input, that is, the prompt, and then use all three of these parameters to, first of all,
3:04
create an Azure OpenAI client and then call our GPT engine from our Azure OpenAI client
3:12
with the help of the chat completions API right over here
3:18
And then obviously, we're going to print the final response content as well
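Putting that description together, the imports and the response function might look roughly like this (a sketch: the function name get_response, the api_version string, and the message shape are my assumptions; the deployment name HelloAI comes from later in the walkthrough):

    # execute.py (sketch) -- imports and the function that calls the GPT engine
    import os        # imported in the lab's script; not all of these are used in this sketch
    import json
    import requests
    from openai import AzureOpenAI

    def get_response(openai_key, openai_endpoint, user_input):
        # first, create the Azure OpenAI client from the key and the endpoint URL
        client = AzureOpenAI(
            api_key=openai_key,
            azure_endpoint=openai_endpoint,
            api_version="2024-02-01",
        )
        # then call the GPT engine through the chat completions API,
        # using the deployment name as the model
        response = client.chat.completions.create(
            model="HelloAI",
            messages=[{"role": "user", "content": user_input}],
        )
        # and finally print the response content
        content = response.choices[0].message.content
        print(content)
        return content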
3:24
And yeah, I have this part of the code as well, wherein I take input as one or two in
3:32
the form of a variable called num: enter one to debug code and two for unit tests for
3:36
your code. If num is equal to one, then I am going to open my factorial.py file with an encoding
3:43
of UTF-8. And then I'm going to pass this file as a prompt, which reads as, please debug this
3:50
code in Python and the file content as well. And then I'm going to call this response function, which I defined right over here
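Putting the input handling together with the else branch for option two (described next), the main block might look roughly like this; the prompt strings follow the walkthrough, while OPENAI_KEY and OPENAI_ENDPOINT are placeholders for the key and endpoint URL configured later on:

    # main block of execute.py (sketch)
    num = input("Enter 1 to debug code and 2 for unit tests for your code: ")

    if num == "1":
        # read factorial.py and ask the engine to debug it
        with open("factorial.py", encoding="utf-8") as f:
            file_content = f.read()
        prompt = "Please debug this code in Python:\n" + file_content
        get_response(OPENAI_KEY, OPENAI_ENDPOINT, prompt)
    else:
        # read function.py and ask the engine for unit tests
        with open("function.py", encoding="utf-8") as f:
            file_content = f.read()
        prompt = "Please provide unit tests for the following code in Python:\n" + file_content
        get_response(OPENAI_KEY, OPENAI_ENDPOINT, prompt)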
4:03
And similarly, if I enter two, I'm going to open my function.py file, I'm going to read
4:08
it and pass it in a prompt as well, which says, please provide unit tests for the following
4:14
code in Python and the file as well. And similarly, I'm going to call the response function at the end of this else statement
4:23
as well. Okay, so let's get started. First of all, we have to set all of these important configurations
4:34
and credentials. So first of all, create our Azure OpenAI client, which will enable us to make calls to our
4:42
GPT engine. So right over here, I have my Azure OpenAI Studio open
4:49
So yeah, first of all, let me show you what models I have deployed
4:54
So I have deployed a HelloAI model and a please-work model. The deployment name is HelloAI and the model is GPT-3.5 Turbo 16K, version
5:05
0613, with a capacity of 1,000 tokens per minute. And the please-work deployment uses the GPT-3.5 Turbo model, version 0613, with a token capacity
5:16
of 1,000 tokens per minute. I'll be using this HelloAI deployment with the GPT-3.5 Turbo 16K model, version 0613
5:27
So first of all, navigate to your chat playground and select the deployment that we'll be using
5:35
We'll be using HelloAI in my case. Open this view code section and from this, copy your endpoint URL and key as well
5:46
So copy this key, paste it over here
5:56
And then we are also going to paste our endpoint URL as well
6:04
So copy this endpoint URL and paste it over here. Model name is HelloAI. Yep. HelloAI
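In the code, plugging those values in amounts to something like this (placeholder values; the real key and endpoint come from the View Code panel, and the variable names are assumptions):

    # configuration and credentials in execute.py (sketch)
    OPENAI_KEY = "<paste-your-key-here>"
    OPENAI_ENDPOINT = "https://<your-resource-name>.openai.azure.com/"  # keep the https:// prefix
    # the deployment (model) name used in the chat completions call is "HelloAI"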
6:16
I guess we are pretty much done with making all the important changes and setting all
6:22
the important configurations and credentials for this code. Now, the only part that is left is to execute our execute.py file
6:31
So open this file in an integrated terminal and type in python execute.py
6:43
It shows the input prompt, so let's enter one first of all to debug our code
6:52
So it gave me an error. Let's see what error it was
7:00
Request URL is missing an HTTP or HTTPS protocol
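That message usually means the endpoint URL was pasted without its scheme; an Azure OpenAI endpoint is normally of the form shown below (the resource name is a placeholder):

    # correct: the scheme is part of the endpoint URL
    OPENAI_ENDPOINT = "https://<your-resource-name>.openai.azure.com/"
    # pasting only "<your-resource-name>.openai.azure.com" (no https://) leads to
    # "Request URL is missing an 'http://' or 'https://' protocol"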
7:22
Let's copy this again and paste it again
7:42
No problem. Yeah
7:55
So I guess there was something wrong with me copying and pasting the endpoint URL
8:01
So I mean, things like these happen. So I'm going to keep this part of the video in the video itself
8:06
So you know, to let you guys know that errors like these do happen
8:11
All you have to do is, you know, just read the error from the integrated terminal
8:17
and sort of wrap your mind around it. And then, yeah, obviously do some sort of brainstorming, and I'm sure within five, maybe
8:28
not even five minutes, your error will get resolved
8:33
So coming to the response that is, you know, sent by our GPT engine, it says the input
8:43
function returns a string. So you need to convert the input to an integer before using it in the while loop. Okay
8:49
Here's the corrected code. Yeah. So it removed the indentation error. Exactly
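Based on that description, the corrected code the engine returned would look something like this (a sketch reconstructed from the explanation, not the verbatim model output):

    # factorial.py after the suggested fixes (sketch)
    def factorial(n):
        result = 1
        while n > 1:
            result = result * n   # now properly indented inside the while loop
            n = n - 1
        return result

    num = int(input("Enter a number: "))  # convert the input string to an integer
    print(factorial(num))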
8:55
So that means it's running well. Now let us, you know, tell the engine to give us the unit tests as well
9:06
Python execute.py. Now let us type in two here
9:21
So just expanding this terminal a bit. So it gave me some important unit tests as well
9:29
With the number combinations three and five, minus two and minus six, zero and zero, a thousand
9:36
and another very long number, 3.5 and 1.2, four and negative two, and stuff like that
9:45
And it also says these tests cover various scenarios, such as positive numbers, negative
9:49
numbers, zero, large numbers, decimals and mixed numbers. You can add or modify the test cases as per your requirements. Yeah
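Reconstructed from that description, the generated unit tests might look roughly like this (a sketch using unittest, assuming the hypothetical function name abs_square_difference from earlier; the model's actual output and the exact large-number case were not shown verbatim):

    # tests of the kind the GPT engine suggested (sketch)
    import unittest
    from function import abs_square_difference

    class TestAbsSquareDifference(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(abs_square_difference(3, 5), 4)

        def test_negative_numbers(self):
            self.assertEqual(abs_square_difference(-2, -6), 16)

        def test_zero(self):
            self.assertEqual(abs_square_difference(0, 0), 0)

        def test_large_numbers(self):
            self.assertEqual(abs_square_difference(1000, 1000000), (1000000 - 1000) ** 2)

        def test_decimals(self):
            self.assertAlmostEqual(abs_square_difference(3.5, 1.2), 2.3 ** 2)

        def test_mixed_signs(self):
            self.assertEqual(abs_square_difference(4, -2), 36)

    if __name__ == "__main__":
        unittest.main()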
9:58
So I guess we're pretty much done with this tutorial or with this lab as well
10:03
So the aim of this lab was to give you a brief idea of how you can, you know, refine your
10:12
code or maybe improve the efficiency of your coding environment with the help of your GPT
10:22
engine as well. So, you know, your GPT engine is again a very powerful tool which you can use to
10:29
enhance your efficiency. And this lab was aimed at just giving you a glimpse of what your GPT engine is capable
10:37
of when it comes to increasing your efficiency. Yeah. So we're pretty much done with this tutorial. Yeah
#Programming
#Development Tools
#Scripting Languages
#Computer Education


