AI Copilots - Tools
Tools allow AI to take actions, modify your application state, interact with your front end, and render custom components within your AI chat. Use tools to extend the capabilities of AI Copilots beyond simple text, enabling autonomous and human-in-the-loop interactions.
Tool use cases
Tools can be used to create a variety of interactions inside your AI chat, such as:
- Actions: Autonomously perform actions like editing documents, redirecting users, sending emails.
- Custom components: Render custom React components like forms, graphs, videos, callouts.
- Query actions: AI can query your app, search documents, find pages, check invoices.
- Human-in-the-loop actions: Show confirm/deny buttons before taking destructive actions.
- AI presence: Tool results can be streamed in, allowing AI to show live updates in your app.
How tools work
You can define a list of tools in your application, and your AI can choose to use them whenever it decides they’re needed. Within each tool you can set certain parameters which AI will fill in for you. For example, a weather tool may have a `location` parameter, and AI may enter `"Paris"` as the value. Here’s an example of a tool call interaction:
1. In your weather tool, `location` is defined as a `string`.
2. User asks about the weather in Paris.
3. AI calls the weather tool with `"Paris"` as the `location`.
4. You write code to fetch the weather for the `location`.
5. AI answers the user.
When writing your system prompt you can suggest when certain tools should be used, helping AI respond as you like. This is just an example of a simple tool, but below we’ll detail how to create more complex tools that have confirm/deny dialogs, render custom components, query data, and more.
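The five steps above can be sketched in plain TypeScript. This is an illustrative simulation of the flow, not the Liveblocks API; the weather lookup is a hard-coded stand-in.

```typescript
// Hypothetical sketch of the tool-call loop, independent of any framework.
type WeatherArgs = { location: string };

// Step 1: the tool defines a `location` parameter and what to do with it.
const weatherTool = {
  description: "Gets the current weather for a location",
  execute: (args: WeatherArgs) => {
    // Step 4: you write code to fetch the weather for the location.
    // (Hard-coded here; a real tool would call a weather API.)
    const temperatures: Record<string, number> = { Paris: 21 };
    return { location: args.location, celsius: temperatures[args.location] ?? null };
  },
};

// Steps 2–3: the user asks about Paris, and AI fills in the parameter.
const weatherResult = weatherTool.execute({ location: "Paris" });

// Step 5: AI reads the result and answers the user.
console.log(`It's ${weatherResult.celsius}°C in ${weatherResult.location}.`);
```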
Defining tools
You can define a tool with `defineAiTool` and `RegisterAiTool`. First, you need to give your tool a unique name and a description, which helps AI understand when to call it. You can place the component anywhere in your app.
For AI to use your tools intelligently, parameters must be defined, which AI will fill in for you. Tools use JSON schema to define these. For example, you can define a `location` parameter as a `string`.
To add functionality to your tool, a combination of `execute` and `render` functions is used. In each of the following sections, different ways to implement `execute` and `render` are detailed.
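As a sketch, a JSON schema parameters object for the weather tool could look like the following. The `type`/`properties`/`required` shape is standard JSON Schema; exactly how it is wired into `defineAiTool` may differ from this sketch.

```typescript
// JSON Schema for a hypothetical weather tool's parameters.
// The description acts as a hint that helps AI fill in the value.
const parameters = {
  type: "object",
  properties: {
    location: {
      type: "string",
      description: "The city to get the weather for, e.g. Paris",
    },
  },
  required: ["location"],
} as const;

console.log(parameters.properties.location.type); // "string"
```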
Actions
If you’d like your AI to perform an action when the tool is called, you can use `execute` to define what should happen. The arguments passed to `execute` are the parameters defined in your tool, filled in by AI. After the tool has run, return any `data` you’d like to pass back to AI.
After running `execute`, AI will read the `data` object and choose how to respond. Additionally, you can define a `description` to pass back to AI. This is a way to inform AI what has just taken place, so it can understand the context of the result and what it should do next. This text is never shown to the user.
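The result shape described above can be sketched like this. The `data` and `description` names follow the text; the rename action and its field names are hypothetical, and the exact library types may differ.

```typescript
// Illustrative shape of an execute result: `data` is returned to AI,
// and `description` tells AI what just happened.
type ExecuteResult<T> = {
  data: T;
  description?: string; // context for AI, never shown to the user
};

// Hypothetical action: rename a document in your app state.
function executeRenameDocument(args: {
  documentId: string;
  title: string;
}): ExecuteResult<{ documentId: string; newTitle: string }> {
  return {
    data: { documentId: args.documentId, newTitle: args.title },
    description: `Renamed document ${args.documentId} to "${args.title}".`,
  };
}

console.log(executeRenameDocument({ documentId: "doc_1", title: "Notes" }));
```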
Display a loading message
You can easily display a loading message while an action takes place using `render` and `AiTool`. You can also choose to display a message after the action has finished, as in the example below. `AiTool` isn’t required here, as you can return any JSX, but it’s an easy way to match the styling of the default chat. Returning `null` will display nothing.
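The pattern amounts to branching on whether the action has finished. In this sketch, plain strings stand in for JSX and the `AiTool` component, and `hasResult` mirrors whether `execute` has completed.

```typescript
// Sketch of the loading-message pattern, with strings standing in for JSX.
function renderWeatherTool(hasResult: boolean): string | null {
  if (!hasResult) {
    return "Checking the weather…"; // loading message while execute runs
  }
  // Returning null here instead would display nothing.
  return "Weather checked"; // message after the action has finished
}

console.log(renderWeatherTool(false)); // "Checking the weather…"
console.log(renderWeatherTool(true)); // "Weather checked"
```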
Combine actions with front-end knowledge
You can combine actions with front-end knowledge to create an AI assistant that can take actions. For example, say you have a document on the current page. You can use knowledge to pass the document’s text to the AI, then create a tool that allows AI to edit the document.
Custom components
You can use tools to display custom components inside the chat with the `render` function. These don’t have to be simple components; they can be complex, like forms, graphs, videos, or callouts. When displaying a simple component, include an `execute` function, even if it’s empty, otherwise the chat will assume it’s a human-in-the-loop action.
You can go further than this and allow AI to create parameters which you can use in your custom component, for example `x` and `y` values on a graph.
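For instance, AI could fill in a list of points that your component turns into something renderable. In this hypothetical sketch, an SVG polyline string stands in for a real React chart component.

```typescript
// AI fills in `points`; your component maps them to an SVG polyline.
type GraphArgs = { points: { x: number; y: number }[] };

function toPolyline(args: GraphArgs): string {
  return args.points.map((p) => `${p.x},${p.y}`).join(" ");
}

console.log(toPolyline({ points: [{ x: 0, y: 0 }, { x: 1, y: 2 }] })); // "0,0 1,2"
```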
AI will most likely write a response after using your tool, but you can prompt the AI not to respond by adding a `description` to `execute`.
Fetching data for custom components
You can take custom components a step further by combining them with
actions, and then showing the results inside the custom component.
In `render`, `args` contains the arguments passed to the tool by AI, and `result` contains the data returned from the action.
Query actions
A helpful way to use tools is to allow AI to query data from your application, such as documents, pages, or other data sources. If your application already contains a search function, you can easily plug it into your tool to create a powerful AI assistant. For example, this tool can search through documents by title, folder, and category.
Since the query is happening on the front-end, you don’t need to implement separate authentication for your AI tool—it can leverage the same APIs that your users are already authorized to access.
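A query tool of this kind can call your existing search function directly. The sketch below is hypothetical: the document shape and the filter fields (title, folder, category) follow the example in the text, and the data is made up.

```typescript
// Hypothetical front-end search that a query tool could call directly.
type Doc = { title: string; folder: string; category: string };

const docs: Doc[] = [
  { title: "Q3 invoice", folder: "finance", category: "invoice" },
  { title: "Roadmap", folder: "planning", category: "doc" },
];

// Every field is optional, so AI can search by any combination of them.
function searchDocs(query: { title?: string; folder?: string; category?: string }): Doc[] {
  return docs.filter(
    (d) =>
      (query.title === undefined || d.title.toLowerCase().includes(query.title.toLowerCase())) &&
      (query.folder === undefined || d.folder === query.folder) &&
      (query.category === undefined || d.category === query.category)
  );
}

// AI might call the tool with { category: "invoice" }:
console.log(searchDocs({ category: "invoice" }).map((d) => d.title)); // ["Q3 invoice"]
```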
Human-in-the-loop actions
Human-in-the-loop actions allow the user to confirm or deny an action before it’s executed. This is particularly useful when it comes to destructive and stateful actions, such as deleting a document, or sending an email. Confirmable actions like these will freeze the chat until the user responds, either by confirming or cancelling the action.
1. User asks to delete a document.
2. AI calls the delete document tool, and “Confirm” and “Deny” buttons are displayed.
3. The chat waits for the user to respond.
4. The user clicks “Confirm”.
5. The document is deleted.
6. The chat unfreezes and is ready to continue.
To create confirmable actions, you must skip the `execute` function and instead move your logic into `render`, as we detail below. This will always freeze the chat until the user responds.
Default confirmation component
The easiest way to create confirmable actions is to return the ready-made `AiTool.Confirmation` component in `render`. The `confirm` and `cancel` callbacks work very similarly to `execute`, and are triggered when the user clicks “Confirm” or “Cancel”.
In the `cancel` callback it’s important to let the AI know that the user cancelled the action, otherwise it may assume the action failed and try to run it again.
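The confirm/cancel pair can be sketched as two plain handlers. This is illustrative, not the exact callback signatures; the key point is that the cancel handler’s `description` tells AI the user declined, so it doesn’t retry.

```typescript
// Sketch of confirm/cancel handlers for a confirmable "delete" action.
function confirmDelete(documentId: string) {
  // ...actually delete the document here...
  return { data: { deleted: documentId } };
}

function cancelDelete(documentId: string) {
  // Tell AI the user cancelled, so it doesn't assume a failure and retry.
  return {
    data: {},
    description: `The user cancelled deleting ${documentId}. Do not try again unless asked.`,
  };
}

console.log(confirmDelete("doc_1"));
console.log(cancelDelete("doc_1"));
```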
Building a custom confirmation component
By utilizing the `respond` argument in `render`, and the different stages of a tool’s lifecycle, you can build a fully custom confirmation component. These are the different stages:
- `receiving` - Displayed when AI is streaming in the parameters.
- `executing` - Displayed when the chat is frozen, and is waiting for a response.
- `executed` - Displayed after a response has been recorded.
Here’s how to leverage the different stages to create a custom “send email” tool. Note how `respond` is used similarly to `execute`.
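The three stages can be modeled as a union type, with the component branching on the current stage. In this sketch, strings stand in for JSX, and the messages are illustrative.

```typescript
// Stage-based rendering for a hypothetical "send email" tool.
type Stage = "receiving" | "executing" | "executed";

function renderSendEmail(stage: Stage, to: string): string {
  switch (stage) {
    case "receiving":
      return "Preparing email…"; // parameters are still streaming in
    case "executing":
      return `Send an email to ${to}?`; // chat frozen, awaiting confirm/deny
    case "executed":
      return `You sent an email to ${to}`; // a response has been recorded
  }
}

console.log(renderSendEmail("executed", "jane@example.com")); // "You sent an email to jane@example.com"
```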
A tool’s stages never reset, which means that once an email has been sent, the tool will remain in the `executed` stage showing “You sent an email to…”, even after refreshing the page. This enables stateful interaction.
AI presence & streaming
You can stream in tool results as they arrive, allowing you to show live updates and AI presence in your application. An example would be a tool that generates documents: you can display the partially generated document as each chunk arrives.
To show this inside the tool, you can use `render`. You’d normally access AI-generated arguments via `args`, but to access streaming results you can use `partialArgs` to get the whole stream up to this point. This all occurs during the `receiving` stage, and as each chunk is received, `render` will re-render and update the UI.
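The accumulation behavior can be simulated in isolation: each incoming chunk extends the stream so far, and each frame corresponds to what `partialArgs` would hold at one re-render. This is a sketch of the behavior, not the library internals.

```typescript
// Simulates partial arguments accumulating as chunks stream in.
function accumulate(chunks: string[]): string[] {
  const frames: string[] = [];
  let partial = "";
  for (const chunk of chunks) {
    partial += chunk;
    frames.push(partial); // the whole stream up to this point
  }
  return frames;
}

console.log(accumulate(["The qu", "ick brown", " fox"]));
// ["The qu", "The quick brown", "The quick brown fox"]
```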
`RegisterAiTool` allows you to stream in strings, objects, and arrays, and you can use them in `render` as they arrive.
Streaming into a document with AI presence
To stream results outside of the chat window, and show AI presence, you can call functions inside `render`. In this case, we’re updating a document outside of the chat window, and inside the chat, we’re showing a simple `AiTool` message.
Advanced JSON schema
Up to this point, we’ve only covered generating objects and strings with AI, but JSON schema allows for more complex data types, such as numbers, arrays, and enums, as well as constraints and hints for AI.
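As a sketch of what such a schema can look like, here is a richer parameters object for a hypothetical chart tool. The field names are made up; the schema keywords (`enum`, `minimum`, `maximum`, `items`) are standard JSON Schema, and the `description` fields act as hints for AI.

```typescript
// A richer JSON Schema: numbers with constraints, an enum, and an array.
const chartParameters = {
  type: "object",
  properties: {
    title: { type: "string", description: "Chart title" },
    style: { type: "string", enum: ["line", "bar", "scatter"] },
    maxPoints: { type: "integer", minimum: 1, maximum: 100 },
    values: {
      type: "array",
      items: { type: "number" },
      description: "Y values, one per data point",
    },
  },
  required: ["style", "values"],
} as const;

console.log(chartParameters.properties.style.enum); // ["line", "bar", "scatter"]
```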
Updating tools
After a tool has been used in the chat, its AI-filled parameters are permanently set. This leaves us with a problem: if we remove or change the tool’s parameters, old versions of the tool will display an empty space in the chat (or, in development mode, an error box).
If you wish for old chats to keep displaying fully working components, take care when creating a new version of a tool. We recommend versioning your tools, and disabling the old tool until you’re sure it’s no longer needed. This ensures the old tool won’t be used in new chats, but its old components will still render correctly.