How to use function calling
Scaleway's Chat Completions API supports function calling as introduced by OpenAI.
What is function calling?
Function calling allows a large language model (LLM) to interact with external tools or APIs, executing specific tasks based on user requests. The LLM identifies the appropriate function, extracts the required parameters, and returns the tool call to execute as structured data, typically in JSON format. While errors can occur, custom parsers or tools like LlamaIndex and LangChain can help ensure valid results.
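For instance, since the arguments come back as a JSON string, a minimal validation step could look like the framework-free sketch below; `parse_tool_arguments` and its checks are purely illustrative, not part of the API:

```python
import json

def parse_tool_arguments(raw_arguments: str, required_keys: set) -> dict:
    """Parse a tool call's JSON argument string and check that required keys are present."""
    try:
        arguments = json.loads(raw_arguments)
    except json.JSONDecodeError as error:
        raise ValueError(f"Model returned invalid JSON arguments: {error}")
    missing = required_keys - arguments.keys()
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    return arguments
```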
Before you start
To complete the actions presented below, you must have:
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid API key for API authentication
- Python 3.7+ installed on your system
Supported models
All the chat models hosted by Scaleway support function calling.
Understanding function calling
Function calling consists of three main components:
- Tool definitions: JSON schemas that describe available functions and their parameters
- Tool selection: Automatic or manual selection of appropriate functions based on user queries
- Tool execution: Processing function calls and handling their responses
The workflow typically follows these steps:
1. Define available tools using JSON schema
2. Send the system and user query along with tool definitions
3. Process the model's function selection
4. Execute the selected functions
5. Return results to the model for the final response
Code examples
We will demonstrate function calling using a flight scheduling system that allows users to check available flights between European airports.
Basic function definition
First, let's define our flight schedule function and its schema:
```python
from openai import OpenAI
import json

def get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict:
    """
    Retrieves flight schedules between two European airports on a specific date.
    """
    # Mock flight schedule data
    flights = {
        "CDG-LHR-2024-11-01": [
            {"flight_number": "AF123", "airline": "Air France", "departure_time": "08:00", "arrival_time": "09:00"},
            {"flight_number": "BA456", "airline": "British Airways", "departure_time": "10:00", "arrival_time": "11:00"},
            {"flight_number": "LH789", "airline": "Lufthansa", "departure_time": "14:00", "arrival_time": "15:00"}
        ],
        "AMS-MUC-2024-11-01": [
            {"flight_number": "KL101", "airline": "KLM", "departure_time": "07:30", "arrival_time": "09:00"},
            {"flight_number": "LH202", "airline": "Lufthansa", "departure_time": "12:00", "arrival_time": "13:30"}
        ]
    }
    key = f"{departure_airport}-{destination_airport}-{departure_date}"
    return flights.get(key, {"error": "No flights found for this route and date."})

# Define the tool specification
tools = [{
    "type": "function",
    "function": {
        "name": "get_flight_schedule",
        "description": "Get available flights between two European airports on a specific date",
        "parameters": {
            "type": "object",
            "properties": {
                "departure_airport": {
                    "type": "string",
                    "description": "The IATA code of the departure airport (e.g., CDG, LHR)"
                },
                "destination_airport": {
                    "type": "string",
                    "description": "The IATA code of the destination airport"
                },
                "departure_date": {
                    "type": "string",
                    "description": "The date of departure in YYYY-MM-DD format"
                }
            },
            "required": ["departure_airport", "destination_airport", "departure_date"]
        }
    }
}]
```
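You can sanity-check the function directly against the mock data above before involving the model:

```python
# Direct call against the mock data defined above
print(get_flight_schedule("CDG", "LHR", "2024-11-01"))
# Returns the three mocked flights for that route and date
```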
Simple function call example
To implement a basic function call, add the following code:
```python
# Initialize the OpenAI client
client = OpenAI(
    base_url="https://api.scaleway.ai/v1",
    api_key="<YOUR_API_KEY>"
)

# Create a simple query
messages = [
    {
        "role": "system",
        "content": "You are a helpful flight assistant."
    },
    {
        "role": "user",
        "content": "What flights are available from CDG to LHR on November 1st, 2024?"
    }
]

# Make the API call
response = client.chat.completions.create(
    model="llama-3.1-70b-instruct",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

print(response.choices[0].message.tool_calls)
```
As the model correctly detects that a tool call is required to answer the question, the output is a list of tool calls specifying the function name and its arguments:
```
[ChatCompletionMessageToolCall(id='chatcmpl-tool-81e63f4f496d429ba9ec6efcff6a86e1', function=Function(arguments='{"departure_airport": "CDG", "destination_airport": "LHR", "departure_date": "2024-11-01"}', name='get_flight_schedule'), type='function')]
```
Call the tool and provide a final answer
To provide the final answer, or to handle more complex interactions, you need to manage multiple turns of conversation:
```python
# Process the tool call
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]

    # Execute the function
    if tool_call.function.name == "get_flight_schedule":
        function_args = json.loads(tool_call.function.arguments)
        function_response = get_flight_schedule(**function_args)

        # Add results to the conversation
        messages.extend([
            {
                "role": "assistant",
                "content": None,
                "tool_calls": [tool_call]
            },
            {
                "role": "tool",
                "name": tool_call.function.name,
                "content": json.dumps(function_response),
                "tool_call_id": tool_call.id
            }
        ])

        # Get final response
        final_response = client.chat.completions.create(
            model="llama-3.1-70b-instruct",
            messages=messages
        )
        print(final_response.choices[0].message.content)
```
Parallel function calling
In addition to the single function call described above, you can also call multiple functions in a single turn. This section shows an example of how to use parallel function calling.
Define the tools:
```python
def open_floor_space(floor_number: int) -> bool:
    """Opens up the specified floor for party space by unlocking doors and moving furniture."""
    print(f"Floor {floor_number} is now open party space!")
    return True

def set_lobby_vibe(party_mode: bool) -> str:
    """Switches lobby screens and lighting to party mode."""
    status = "party mode activated!" if party_mode else "back to business mode"
    print(f"Lobby is now in {status}")
    return "The lobby is ready to party!"

def prep_snack_station(activate: bool) -> bool:
    """Converts the cafeteria into a snack and drink station."""
    print(f"Snack station is {'open and stocked!' if activate else 'closed.'}")
    return True
```
Define the tool specifications:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "open_floor_space",
            "description": "Opens up an entire floor for the party",
            "parameters": {
                "type": "object",
                "properties": {
                    "floor_number": {
                        "type": "integer",
                        "description": "Which floor to open up"
                    }
                },
                "required": ["floor_number"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "set_lobby_vibe",
            "description": "Transform lobby atmosphere into party mode",
            "parameters": {
                "type": "object",
                "properties": {
                    "party_mode": {
                        "type": "boolean",
                        "description": "True for party, False for business"
                    }
                },
                "required": ["party_mode"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "prep_snack_station",
            "description": "Set up the snack and drink station",
            "parameters": {
                "type": "object",
                "properties": {
                    "activate": {
                        "type": "boolean",
                        "description": "True to open, False to close"
                    }
                },
                "required": ["activate"]
            }
        }
    }
]
```
Next, call the model with the proper instructions:
```python
system_prompt = """
You are an office party control assistant. When asked to transform the office into a party space, you should:
1. Open up a floor for the party
2. Transform the lobby into party mode
3. Set up the snack station
Make all these changes at once for an instant office party!
"""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Turn this office building into a party!"}
]
```
Best practices
When implementing function calling, follow these guidelines for optimal results:
- Function design
  - Keep function names clear and descriptive
  - Limit the number of functions to 7 or fewer per conversation
  - Use detailed parameter descriptions in your JSON schema
- Parameter handling
  - Always specify required parameters
  - Use appropriate data types and validation
  - Include example values in parameter descriptions
- Error handling
  - Implement robust error handling for function execution
  - Return clear error messages that the model can interpret (see the sketch after this list)
  - Handle edge cases gracefully
- Performance optimization
  - Set appropriate temperature values (lower for more precise function calls)
  - Cache frequently accessed data when possible
  - Minimize the number of turns in multi-turn conversations
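As a concrete illustration of the error-handling and temperature points above, a tool can be wrapped so that failures come back as structured messages the model can relay, and the request can use a low temperature. The wrapper below is a sketch reusing `get_flight_schedule` from the first example; `messages` and `tools` stand for whichever conversation and tool definitions you are using:

```python
def safe_get_flight_schedule(departure_airport: str, destination_airport: str, departure_date: str) -> dict:
    """Wrap the tool so the model always receives a result it can interpret."""
    try:
        return get_flight_schedule(departure_airport, destination_airport, departure_date)
    except Exception as error:
        # Return a clear, structured error message instead of raising
        return {"error": f"Could not retrieve flights: {error}"}

# A lower temperature tends to make tool selection and argument extraction more precise
response = client.chat.completions.create(
    model="llama-3.1-70b-instruct",
    messages=messages,
    tools=tools,
    tool_choice="auto",
    temperature=0.2
)
```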
Function calling significantly extends the capabilities of language models by allowing them to interact with external tools and APIs.