How to Use ChatGPT (GPT 3/4) Custom Function Calling (Tools)
By Giel Oomen
Introduction
This blog is a guide on how to set up custom tools for the OpenAI GPT APIs. These tools were previously called custom functions; while most of the syntax is the same, this blog focuses on the new tool syntax. Using tools allows developers to create applications that leverage their own, often protected, data with GPT-4-level LLM quality.
Each code block in this blog is a code cell of a Jupyter notebook, making it easy to recreate or use as a basis for your own projects.
Create Client
Make sure your OpenAI key is added to your system's environment variables as `OPENAI_API_KEY={YOUR_API_KEY}`; this is where `client = OpenAI()` will take your key from.
```python
from openai import OpenAI
import json

client = OpenAI()
```
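If you prefer not to rely on the implicit environment variable lookup, the key can also be passed explicitly. A minimal sketch, assuming the key is stored in `OPENAI_API_KEY` as above:

```python
import os
from openai import OpenAI

# Pass the key explicitly instead of relying on the implicit lookup
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```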
Normal Chat Completion Request
Test the client to verify your API key is working and the API is responding as expected. The following code uses `model = "gpt-4-1106-preview"`, which is the 128k-context GPT-4 Turbo model. This can of course be changed to your preference.
model = "gpt-4-1106-preview"
question = "Say this is a test"
response = client.chat.completions.create(
model=model,
messages=[{"role": "user", "content": question}],
)
response.choices[0].message.content
Running the code should print `'This is a test.'`.
Define Functions (Tools)
In this step the functions will be defined. As per the OpenAI API Reference, the only tool type currently supported is `function`.
First, define the actual functions. In this example I use the `get_current_weather()` function from an OpenAI example, with `fahrenheit_to_celsius()` added for a clearer example of tool chaining.
```python
# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    elif "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})


def fahrenheit_to_celsius(fahrenheit):
    """Convert fahrenheit to celsius"""
    return (float(fahrenheit) - 32) * 5 / 9
```
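Before wiring these into the API, you can sanity-check them locally. This quick ad-hoc check is not part of the original notebook:

```python
# The dummy weather function returns a JSON string
weather = json.loads(get_current_weather("San Francisco"))
print(weather)  # {'location': 'San Francisco', 'temperature': '72', 'unit': 'fahrenheit'}

# Convert the returned Fahrenheit value to Celsius
print(round(fahrenheit_to_celsius(weather["temperature"]), 1))  # 22.2
```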
Then, define the functions again as tools that can be passed to the GPT-4 API. Note that the descriptions are also passed as context with each API request, so make sure they are clear.
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "fahrenheit_to_celsius",
            "description": "Convert fahrenheit to celsius",
            "parameters": {
                "type": "object",
                "properties": {"fahrenheit": {"type": "number"}},
                "required": ["fahrenheit"],
            },
        },
    },
]
```
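The `parameters` field follows the JSON Schema format, so values can also be constrained. For example, OpenAI's original weather example exposes the optional `unit` argument as an enum; a sketch of that variant (the `weather_tool_with_unit` name is just for illustration):

```python
# Optional variant: also expose the `unit` argument, constrained to two values
weather_tool_with_unit = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}
```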
GPT Function Call Response
Now, let's test a new prompt with the functions passed as tools, to verify that the API tries to use the tools instead of answering based on its own knowledge.
messages = [{"role": "user", "content": "What's the weather like in San Francisco?"}]
response = client.chat.completions.create(
model=model,
messages=messages,
tools=tools,
)
print(response.choices[0])
This code block should print the following choice information:
```
Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_2Gigc44AReLyTVpVQYiBAUpx', function=Function(arguments='{"location":"San Francisco, CA"}', name='get_current_weather'), type='function')]))
```
It shows that the `finish_reason` is `'tool_calls'`, and the message contains information about which function to call and with which arguments; this information is then used to pass the args to the right function.
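As a minimal illustration of that dispatch step (reusing the `response` from the cell above), note that the arguments arrive as a JSON string and must be parsed before calling the local function:

```python
tool_call = response.choices[0].message.tool_calls[0]
function_args = json.loads(tool_call.function.arguments)  # '{"location":"San Francisco, CA"}' -> dict
print(get_current_weather(**function_args))
```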
OpenAI's Example: Multiple Function Calls After Single Prompt
OpenAI has an example in which they request weather data for three cities based on the `get_current_weather()` function call. This works because the GPT API knows it has to make three function calls based on the initial message. However, this does not work when tools should be chained, like first requesting the temperature and then converting that temperature from Fahrenheit to Celsius.
Below is a slightly modified version of the example from OpenAI:
```python
def run_conversation(tools):
    # Step 1: send the conversation and available functions to the model
    messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
    # messages = [
    #     {"role": "user", "content": "What's the weather like in San Francisco, in degrees celsius?"}
    # ]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        tool_choice="auto",  # auto is default, but we'll be explicit
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls
    # Step 2: check if the model wanted to call a function
    if tool_calls:
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
            "fahrenheit_to_celsius": fahrenheit_to_celsius,
        }
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            print(f"Calling {function_name} with {tool_call.function.arguments}")
            function_args = json.loads(tool_call.function.arguments)
            # get the right response based on which function was called
            function_response = function_to_call(**function_args)
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )  # extend conversation with function response
        second_response = client.chat.completions.create(
            model=model,
            messages=messages,
        )  # get a new response from the model where it can see the function response
        return second_response


response = run_conversation(tools)
print(response.choices[0].message.content)
```
I have added two example messages. The first one, with the weather request for San Francisco, Tokyo, and Paris, returns the following:
```
Calling get_current_weather with {"location": "San Francisco, CA"}
Calling get_current_weather with {"location": "Tokyo"}
Calling get_current_weather with {"location": "Paris"}

Here are the current weather conditions:

- **San Francisco:** The temperature is 72°F.
- **Tokyo:** The temperature is 10°F (Be aware this might be an error since it's unusually cold for Tokyo).
- **Paris:** The temperature is 22°F.

Please note, the temperature provided for Tokyo seems abnormally low; it's possible there was an error in reporting. It might be a good idea to check a reliable local weather source for the most accurate and up-to-date information.
```
The second example message requests the temperature in San Francisco in degrees Celsius. The output of the model is the following:
```
Calling get_current_weather with {"location":"San Francisco, CA"}

In San Francisco, the current temperature is 72 degrees Fahrenheit, which is equivalent to approximately 22.2 degrees Celsius.
```
As we can see, there is no print statement saying the Fahrenheit conversion function was called, because it wasn't. In order for the GPT API to know it has to call that function, we would first need to return the `get_current_weather()` response, so the GPT API can see there is a Fahrenheit value that still has to be converted.
NOTE: It does return an approximate degrees Celsius value; this works because the GPT API knows how to do the conversion by itself.
Function Call Chaining
In order to actually use both functions in a chained manner, I wrote the following code:
```python
def run_conversation(tools):
    messages = [
        {"role": "user", "content": "What's the weather like in San Francisco, in degrees celsius?"}
    ]
    available_functions = {
        "get_current_weather": get_current_weather,
        "fahrenheit_to_celsius": fahrenheit_to_celsius,
    }
    while True:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if not tool_calls:
            break  # Exit loop if no tool calls were made
        messages.append(response_message)  # Extend conversation with assistant's reply
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            print(f"Calling {function_name} with {tool_call.function.arguments}")
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(**function_args)
            # Extend conversation with function response
            # (cast to str: fahrenheit_to_celsius returns a float, but the
            # content of a tool message must be a string)
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": str(function_response),
            })
    # No more tool calls to make, return the last response
    return response


print(run_conversation(tools))
```
The output of this function is the following:
```
Calling get_current_weather with {"location": "San Francisco, CA"}
Calling fahrenheit_to_celsius with {"fahrenheit":72}
ChatCompletion(id='chatcmpl-8gzTJQOgTjwn1QMMIE6QbZ65asD25', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The current weather in San Francisco, CA is approximately 22.2 degrees Celsius.', role='assistant', function_call=None, tool_calls=None))], created=1705256765, model='gpt-4-1106-preview', object='chat.completion', system_fingerprint='fp_168383a679', usage=CompletionUsage(completion_tokens=18, prompt_tokens=186, total_tokens=204))
```
We can now verify that it first used the `get_current_weather` function to obtain the value in Fahrenheit, and then used the `fahrenheit_to_celsius` function to convert the value to degrees Celsius.
Chaining tool calls like this allows for a huge range of new applications.
Notes
- Functions (tools) are passed as context in the background, adding to the input token length of each API call.
- Defining good function descriptions is necessary for GPT to utilize the desired functions.
- Chaining requests in a while loop should be done with caution, as it could keep sending API requests indefinitely; a bounded variant is sketched below.
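A minimal sketch of such a guard, assuming the same `client`, `model`, and helper functions as above (the `max_iterations` cap and the `run_conversation_bounded` name are illustrative, not part of the original code):

```python
def run_conversation_bounded(tools, max_iterations=5):
    """Same chained loop as above, but with a hypothetical cap on rounds."""
    messages = [
        {"role": "user", "content": "What's the weather like in San Francisco, in degrees celsius?"}
    ]
    available_functions = {
        "get_current_weather": get_current_weather,
        "fahrenheit_to_celsius": fahrenheit_to_celsius,
    }
    for _ in range(max_iterations):
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        response_message = response.choices[0].message
        if not response_message.tool_calls:
            return response  # final answer, no more tool calls
        messages.append(response_message)
        for tool_call in response_message.tool_calls:
            function_response = available_functions[tool_call.function.name](
                **json.loads(tool_call.function.arguments)
            )
            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": tool_call.function.name,
                "content": str(function_response),  # tool output must be a string
            })
    raise RuntimeError("Exceeded max_iterations; aborting to avoid runaway API usage")
```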