AI Copilots - Tools

Tools allow AI to take actions, modify your application state, interact with your front-end, and render custom components within your AI chat. Use tools to extend the capabilities of AI Copilots beyond simple text, enabling autonomous and human-in-the-loop interactions.

Tool use cases

Tools can be used to create a variety of interactions inside your AI chat, such as:

  • Actions: Autonomously perform actions like editing documents, redirecting users, sending emails.
  • Custom components: Render custom React components like forms, graphs, videos, callouts.
  • Query actions: AI can query your app, search documents, find pages, check invoices.
  • Human-in-the-loop actions: Show confirm/deny buttons before taking destructive actions.
  • AI presence: Tool results can be streamed in, allowing AI to show live updates in your app.

How tools work

You can define a list of tools in your application, and your AI can choose to use them whenever it decides they’re needed. Within each tool you can set certain parameters which AI will fill in for you. For example, a weather tool may have a location parameter, and AI may enter "Paris" as the value. Here’s an example of a tool call interaction:

  1. In your weather tool, location is defined as a string

    { "location": { "type": "string" } }
  2. User asks about the weather in Paris

    User: "What's the weather in Paris?"
  3. AI calls the weather tool with Paris as the location

    { "location": "Paris" }
  4. You write code to fetch the weather for the location

    execute: async ({ location }) => {
      // e.g. returns { "temperature": 20, "condition": "sunny" }
      const weather = await fetchWeather(location); // `fetchWeather` is your own function
      return { data: { weather } };
    }
  5. AI answers the user

    AI: "It's sunny in Paris, with a temperature of 20°C."
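The round trip above can be sketched in plain TypeScript, with no UI involved. Note that `fetchWeather` and the data shapes here are hypothetical stand-ins for your own code, not part of the Liveblocks API:

```typescript
// A minimal sketch of the tool-call round trip described above.
// `fetchWeather` is a hypothetical stand-in for your own data source.
type WeatherArgs = { location: string };
type WeatherResult = { temperature: number; condition: string };

async function fetchWeather(location: string): Promise<WeatherResult> {
  // A real app would call a weather API here.
  return { temperature: 20, condition: "sunny" };
}

// 3. AI fills in the parameters defined by your JSON schema...
const args: WeatherArgs = { location: "Paris" };

// 4. ...and your `execute` function turns them into data for AI to read.
const execute = async ({ location }: WeatherArgs) => {
  const weather = await fetchWeather(location);
  return { data: { weather } };
};

execute(args).then((result) => {
  console.log(result.data.weather.condition); // logs "sunny"
});
```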

When writing your system prompt you can suggest when certain tools should be used, helping AI respond as you like. This is just an example of a simple tool, but below we’ll detail how to create more complex tools that have confirm/deny dialogs, render custom components, query data, and more.

Defining tools

You can define a tool with defineAiTool and RegisterAiTool. First, give your tool a unique name and a description, which helps AI understand when to call it. You can place the component anywhere in your app.

import { RegisterAiTool } from "@liveblocks/react";
import { defineAiTool } from "@liveblocks/client";
import { AiChat } from "@liveblocks/react-ui";

function Chat() {
  return (
    <>
      <AiChat chatId="my-chat-id" />
      <RegisterAiTool
        name="weather-tool"
        tool={defineAiTool()({
          description: "Get the weather for a location",
          // ...
        })}
      />
    </>
  );
}

For AI to use your tools intelligently, you must define parameters, which AI will fill in for you. Tools use JSON schema to define these. For example, you can define a location parameter as a string.

<RegisterAiTool
  name="weather-tool"
  tool={defineAiTool()({
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
      additionalProperties: false,
    },
    // ...
  })}
/>

To add functionality to your tool, a combination of execute and render functions is used.

<RegisterAiTool
  name="weather-tool"
  tool={defineAiTool()({
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
      additionalProperties: false,
    },
    execute: async ({ location }) => {
      // ...
    },
    render: ({ stage, partialArgs, args, result, respond }) => {
      // ...
    },
  })}
/>

The following sections detail different ways to implement execute and render.

Actions

If you’d like your AI to perform an action when the tool is called, you can use execute to define what should happen. The arguments passed to execute are the parameters defined in your tool, filled in by AI. After the tool has run, return any data you’d like to pass back to AI.

<RegisterAiTool
  name="weather-tool"
  tool={defineAiTool()({
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
      additionalProperties: false,
    },
    execute: async ({ location }) => {
      const weather = await fetchWeather(location); // your own data-fetching function
      return { data: { weather } };
    },
  })}
/>

After running execute, AI will read the data object, and choose how to respond. Additionally, you can define a description to pass back to AI. This is a way to inform AI what has just taken place, so it can understand the context of the result, and what it should do next. This text will never be shown to the user.

execute: async ({ location }) => {
  const weather = await fetchWeather(location); // your own data-fetching function
  return {
    data: { weather },
    description: "You've just fetched the weather, share the temperature in °C.",
  };
},

Display a loading message

You can easily display a loading message while an action takes place using render and AiTool. You can also choose to display a message after the action has finished, as in the example below.

import { AiChat, AiTool } from "@liveblocks/react-ui";
import { RegisterAiTool } from "@liveblocks/react";
import { defineAiTool } from "@liveblocks/client";

function Chat() {
  return (
    <>
      <AiChat chatId="my-chat-id" />
      <RegisterAiTool
        name="weather-tool"
        tool={defineAiTool()({
          description: "Get the weather for a location",
          parameters: {
            type: "object",
            properties: {
              location: { type: "string" },
            },
            required: ["location"],
            additionalProperties: false,
          },
          execute: async ({ location }) => {
            const weather = await fetchWeather(location); // your own function
            return {
              data: { weather },
              description: "You've just fetched the weather.",
            };
          },
          render: ({ stage }) => {
            // `execute` is still running
            if (stage !== "executed") {
              return <AiTool title="Fetching weather…" variant="minimal" />;
            }

            // `execute` has finished
            return <AiTool title="Weather fetched" variant="minimal" />;
          },
        })}
      />
    </>
  );
}

AiTool isn’t required here, as you can return any JSX, but it’s an easy way to match the styling of the default chat. Returning null will display nothing.

Combine actions with front-end knowledge

You can combine actions with front-end knowledge to create an AI assistant that works with your app's current state. For example, say you have a document on the current page. You can use knowledge to pass the document's text to AI, then create a tool that allows AI to edit the document.

import { RegisterAiKnowledge, RegisterAiTool } from "@liveblocks/react";
import { AiChat, AiTool } from "@liveblocks/react-ui";
import { defineAiTool } from "@liveblocks/client";
import { useState } from "react";

function Document() {
  const [document, setDocument] = useState("Hello world");

  return (
    <>
      <AiChat chatId="my-chat-id" />
      <RegisterAiKnowledge description="The document's text" value={document} />
      <RegisterAiTool
        name="edit-document"
        tool={defineAiTool()({
          description: "Edit the document's text",
          parameters: {
            type: "object",
            properties: {
              text: { type: "string" },
            },
            required: ["text"],
            additionalProperties: false,
          },
          execute: ({ text }) => {
            setDocument(text);
            return { data: {}, description: "Document updated" };
          },
          render: ({ stage }) => {
            if (stage !== "executed") {
              return <AiTool title="Updating document…" variant="minimal" />;
            }

            return <AiTool title="Document updated" variant="minimal" />;
          },
        })}
      />
    </>
  );
}

Custom components

You can use tools to display custom components inside the chat with the render function. These don't have to be simple components; they can be complex ones like forms, graphs, videos, or callouts. When displaying a simple component, include an execute function, even if it's empty, otherwise the chat will assume it's a human-in-the-loop action.

<RegisterAiTool
  name="graph-tool"
  tool={defineAiTool()({
    description: "Display a graph in the chat",
    parameters: {},
    execute: () => {},
    render: () => {
      return <MyGraph x={50} y={100} />;
    },
  })}
/>

You can go further than this and allow AI to fill in parameters which you can use in your custom component, for example x and y values on a graph.

<RegisterAiTool
  name="graph-tool"
  tool={defineAiTool()({
    description: "Display a graph in the chat",
    parameters: {
      type: "object",
      properties: {
        x: { type: "number" },
        y: { type: "number" },
      },
      required: ["x", "y"],
      additionalProperties: false,
    },
    execute: () => {},
    render: ({ args, stage }) => {
      if (stage !== "executed") {
        return <div>Loading...</div>;
      }

      return <MyGraph x={args.x} y={args.y} />;
    },
  })}
/>

AI will most likely write a response after using your tool, but you can prompt it not to respond by returning a description from execute.

<RegisterAiTool
  name="graph-tool"
  tool={defineAiTool()({
    description: "Display a graph in the chat",
    parameters: {
      type: "object",
      properties: {
        x: { type: "number" },
        y: { type: "number" },
      },
      required: ["x", "y"],
      additionalProperties: false,
    },
    execute: () => {
      return {
        data: {},
        description: "You're displaying a graph. Do not respond further.",
      };
    },
    render: ({ args, stage }) => {
      if (stage !== "executed") {
        return <div>Loading...</div>;
      }

      return <MyGraph x={args.x} y={args.y} />;
    },
  })}
/>

Fetching data for custom components

You can take custom components a step further by combining them with actions, and then showing the results inside the custom component. The result property contains the data returned from the action.

<RegisterAiTool
  name="weather-tool"
  tool={defineAiTool()({
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
      additionalProperties: false,
    },
    execute: async ({ location }) => {
      // e.g. returns { "temperature": 20, "condition": "sunny" }
      const weather = await fetchWeather(location); // your own function
      return { data: { weather } };
    },
    render: ({ stage, args, result }) => {
      if (stage !== "executed") {
        return <div>Fetching weather…</div>;
      }

      return (
        <MyWeatherComponent
          location={args.location}
          condition={result.data.weather.condition}
          temperature={result.data.weather.temperature}
        />
      );
    },
  })}
/>

args contains the arguments passed to the tool from AI, and result contains the data returned from the tool.

Query actions

A helpful way to use tools is to allow AI to query data from your application, such as documents, pages, or other data sources. If your application already contains a search function, you can easily plug it into your tool to create a powerful AI assistant. For example, this tool can search through documents by title, folder, and category.

<RegisterAiTool
  name="find-documents"
  tool={defineAiTool()({
    description: "Find documents by title, folder, and category",
    parameters: {
      type: "object",
      properties: {
        title: { type: "string" },
        folder: { type: "string" },
        category: { type: "string" },
      },
      required: ["title", "folder", "category"],
      additionalProperties: false,
    },
    execute: async ({ title, folder, category }) => {
      const documents = await searchDocuments({ title, folder, category }); // your app's existing search function
      return {
        data: { documents },
        description: documents.length > 0 ? `${documents.length} results` : "No results",
      };
    },
    render: ({ stage, args, result }) => {
      if (stage !== "executed") {
        return <AiTool title="Fetching documents…" variant="minimal" />;
      }

      return null;
    },
  })}
/>

Since the query is happening on the front-end, you don’t need to implement separate authentication for your AI tool—it can leverage the same APIs that your users are already authorized to access.

Human-in-the-loop actions

Human-in-the-loop actions allow the user to confirm or deny an action before it’s executed. This is particularly useful when it comes to destructive and stateful actions, such as deleting a document, or sending an email. Confirmable actions like these will freeze the chat until the user responds, either by confirming or cancelling the action.

  1. User asks to delete a document

    User: "Can you delete my-document.txt?"
  2. AI calls the delete document tool, and “Confirm” and “Cancel” buttons are displayed

    The chat waits for the user to respond.

  3. The user clicks “Confirm”

    The document is deleted.

  4. The chat unfreezes and is ready to continue

    AI: "I've deleted it! How else can I help you?"

To create confirmable actions, omit the execute function and instead move your logic into render, as detailed below. This will always freeze the chat until the user responds.

Default confirmation component

The easiest way to create confirmable actions is to return the ready-made AiTool.Confirmation component in render. The confirm and cancel callbacks work much like execute, and are triggered when the user clicks “Confirm” or “Cancel”.

<RegisterAiTool
  name="delete-document"
  tool={defineAiTool()({
    description: "Delete a document by its ID",
    parameters: {
      type: "object",
      properties: {
        documentId: { type: "string" },
      },
      required: ["documentId"],
      additionalProperties: false,
    },
    render: ({ stage, args, result, types }) => {
      return (
        <AiTool title="Delete document" variant="minimal">
          <AiTool.Confirmation
            types={types}
            confirm={async ({ documentId }) => {
              await deleteDocument(documentId); // your own delete function
              return {
                data: { documentId },
                description: "The user chose to delete the document",
              };
            }}
            cancel={({ documentId }) => {
              return {
                data: { documentId },
                description: "The user cancelled deleting the document",
              };
            }}
          />
        </AiTool>
      );
    },
  })}
/>

In the cancel callback it’s important to let the AI know that the user cancelled the action, otherwise it may assume the action failed and try to run it again.

description: "The user cancelled deleting the document",

Building a custom confirmation component

By utilizing the respond argument in render, and the different stages of a tool’s lifecycle, you can build a fully custom confirmation component. These are the different stages:

  1. receiving - AI is still streaming in the parameters.
  2. executing - The chat is frozen and waiting for a response.
  3. executed - A response has been recorded.
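As a rough sketch, these stages can be modeled as a plain union type; the describeStage helper below is purely illustrative and not part of the Liveblocks API:

```typescript
// The three lifecycle stages a tool's `render` function can receive.
type ToolStage = "receiving" | "executing" | "executed";

// Illustrative helper: what a confirmation UI might show per stage.
function describeStage(stage: ToolStage): string {
  switch (stage) {
    case "receiving":
      return "AI is still streaming in the parameters";
    case "executing":
      return "Chat is frozen, waiting for a response";
    case "executed":
      return "A response has been recorded";
  }
}

console.log(describeStage("executing")); // logs "Chat is frozen, waiting for a response"
```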

Here’s how to leverage the different stages to create a custom “send email” tool—note how respond is used similarly to execute.

<RegisterAiTool
  name="send-email"
  tool={defineAiTool()({
    description: "Send an email",
    parameters: {
      type: "object",
      properties: {
        emailAddress: { type: "string" },
      },
      required: ["emailAddress"],
      additionalProperties: false,
    },
    render: ({ stage, args, respond, result }) => {
      // `emailAddress` param is still streaming in, wait
      if (stage === "receiving") {
        return <div>Loading...</div>;
      }

      // The tool is waiting for `respond` to be called
      if (stage === "executing") {
        return (
          <form
            onSubmit={async (e) => {
              e.preventDefault();
              const message = e.target.message.value;
              await sendEmail(args.emailAddress, message); // your own function

              // Similar to `execute`/`confirm`, let AI know it succeeded
              respond({
                data: { emailAddress: args.emailAddress, message },
                description: "You sent an email for the user",
              });
            }}
          >
            <textarea name="message" />
            <button type="submit">Send</button>
            <button
              type="button"
              onClick={() =>
                // Similar to `execute`/`cancel`, let AI know the user cancelled
                respond({
                  data: { emailAddress: args.emailAddress },
                  description: "The user cancelled sending an email",
                })
              }
            >
              Cancel
            </button>
          </form>
        );
      }

      // `respond` has already been called, show the result
      return (
        <div>
          You sent an email to {args.emailAddress}: "{result.data.message}".
        </div>
      );
    },
  })}
/>

A tool’s stages never reset, which means that once an email has been sent, the tool will remain in the executed stage showing “You sent an email to…”, even after refreshing the page. This enables stateful interaction.

AI presence & streaming

You can stream in tool results as they arrive, allowing you to show live updates and AI presence in your application. An example would be a tool that generates documents—you can display the partially generated document as each chunk arrives, for example:

  1. { "content": "Here’s" }
  2. { "content": "Here’s my suggestions" }
  3. { "content": "Here’s my suggestions for a" }
  4. { "content": "Here’s my suggestions for a marketing email!" }

To show this inside the tool, you can use render. You'd normally access AI-generated arguments via args, but to access streaming results you can use partialArgs, which contains the whole stream received up to this point. This all occurs during the receiving stage, and as each chunk arrives, render will re-run and update the UI.

<RegisterAiTool
  name="create-document"
  tool={defineAiTool()({
    description: "Create a document",
    parameters: {
      type: "object",
      properties: {
        content: { type: "string" },
      },
      required: ["content"],
      additionalProperties: false,
    },
    execute: () => {},
    render: ({ stage, partialArgs, args }) => {
      // Document is streaming in
      if (stage === "receiving") {
        return <div>Document: {partialArgs.content}</div>;
      }

      // Document has fully streamed in
      return <div>Document: {args.content}</div>;
    },
  })}
/>
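Outside of JSX, the accumulation itself can be sketched in plain TypeScript. The chunks below mirror the example list above, and the renderText helper is illustrative, not part of the Liveblocks API:

```typescript
// Simulate how `partialArgs.content` grows as each chunk arrives during
// the "receiving" stage. These chunk contents are hypothetical.
const chunks = [
  "Here’s",
  "Here’s my suggestions",
  "Here’s my suggestions for a",
  "Here’s my suggestions for a marketing email!",
];

// On each chunk, `render` re-runs with the stream received so far.
function renderText(partialArgs: { content: string }): string {
  return `Document: ${partialArgs.content}`;
}

const frames = chunks.map((content) => renderText({ content }));
console.log(frames[frames.length - 1]);
// logs "Document: Here’s my suggestions for a marketing email!"
```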

RegisterAiTool allows you to stream in strings, objects, and arrays, and you can use them in render as they arrive.

Streaming into a document with AI presence

To stream results outside of the chat window, and show AI presence, you can call functions inside render. In this case, we’re updating a document outside of the chat window, and inside the chat, we’re showing a simple AiTool message.

<RegisterAiTool
  name="create-document"
  tool={defineAiTool()({
    description: "Create a document",
    parameters: {
      type: "object",
      properties: {
        content: { type: "string" },
      },
      required: ["content"],
      additionalProperties: false,
    },
    execute: () => {},
    render: ({ stage, partialArgs, args }) => {
      // Document is streaming in, update document and presence
      if (stage === "receiving") {
        setDocument(partialArgs.content); // your own state setters
        setAiCursor(partialArgs.content.length);
        return <AiTool title="Creating document…" />;
      }

      // Final chunk of stream is here, update document and end presence
      if (stage === "executing") {
        setDocument(args.content);
        setAiCursor(null);
        return <AiTool title="Creating document…" />;
      }

      // Document has fully streamed in
      return <AiTool title="Document complete!" />;
    },
  })}
/>

Advanced JSON schema

Up to this point, we’ve only covered generating objects and strings with AI, but JSON schema allows for more complex data types, such as numbers, arrays, enums, as well as constraints, and hints for AI.

{
  type: "object",
  properties: {
    // String
    name: { type: "string" },

    // Number
    age: { type: "number" },

    // Boolean
    sendEmail: { type: "boolean" },

    // Union
    count: { type: ["string", "number", "null"] },

    // Array
    pets: { type: "array", items: { type: "string" } },

    // Description, helps AI understand the type
    job: { type: "string", description: "A job title, 1-2 words" },

    // String constraints
    fullName: {
      type: "string",
      minLength: 1,
      maxLength: 100,
      pattern: "^[A-Z][a-zA-Z'-]+(?: [A-Z][a-zA-Z'-]+)+$",
    },

    // Number constraints
    ageInYears: { type: "number", minimum: 18, maximum: 100 },

    // Combination of all
    people: {
      type: "array",
      description: "A list of all people",
      items: {
        type: "object",
        properties: {
          sendEmail: { type: "boolean", description: "Notify them by email?" },
          name: { type: "string", description: "First and last names" },
          age: { type: "number", minimum: 18, description: "Age in years" },
          job: { type: ["string", "null"], description: "`null` for unemployed" },
        },
      },
    },
  },

  // Recommended to always set the following
  additionalProperties: false,
  required: ["name", "age", "sendEmail", "job", "fullName", "ageInYears", "pets", "people"],
}
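If you want to sanity-check AI-filled arguments against a schema like this at runtime, a JSON schema validator can do it. Here is a minimal hand-rolled sketch covering only the required keys and primitive types; a real app would use a full validator such as Ajv:

```typescript
// Minimal, hand-rolled check of `required` keys and primitive `type`s.
// This is an illustrative sketch, not a complete JSON-schema validator.
type Schema = {
  type: "object";
  properties: Record<string, { type: string | string[] }>;
  required?: string[];
};

function matchesType(value: unknown, type: string): boolean {
  if (type === "null") return value === null;
  if (type === "array") return Array.isArray(value);
  return typeof value === type;
}

function validate(schema: Schema, args: Record<string, unknown>): boolean {
  // Every required key must be present
  for (const key of schema.required ?? []) {
    if (!(key in args)) return false;
  }
  // Every present key must be declared and match one of its allowed types
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) return false; // mimics `additionalProperties: false`
    const types = Array.isArray(prop.type) ? prop.type : [prop.type];
    if (!types.some((t) => matchesType(value, t))) return false;
  }
  return true;
}

const schema: Schema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" },
    job: { type: ["string", "null"] },
  },
  required: ["name", "age"],
};

console.log(validate(schema, { name: "Ada", age: 36, job: null })); // true
console.log(validate(schema, { name: "Ada" })); // false, `age` missing
```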

Updating tools

After a tool has been used in the chat, its AI-filled parameters are permanently set. This leaves us with a problem—if we remove or change the tool’s parameters, old versions of the tool will display an empty space in the chat (or in development mode, an error box).

// ❌ Will render `null` in the chat instead of a tool
<RegisterAiTool
  name="send-email"
  tool={defineAiTool()({
    description: "Send an email",
    parameters: {
      type: "object",
      properties: {
-       email: { type: "string" },
+       emailAddress: { type: "string" },
      },
-     required: ["email"],
+     required: ["emailAddress"],
      additionalProperties: false,
    },
  })}
  // ...
/>

Take care when creating a new version of a tool if you want old chats to keep displaying fully working components. For this reason, we recommend versioning your tools, and disabling the old version rather than removing it until you're sure it's no longer needed. This ensures the old tool won't be used in new chats, but its component will still render correctly.

import { RegisterAiTool } from "@liveblocks/react";
import { defineAiTool } from "@liveblocks/client";
import { AiChat } from "@liveblocks/react-ui";

function Chat() {
  return (
    <>
      <AiChat chatId="my-chat-id" />
      <RegisterAiTool
        name="send-email"
        // Disabling the old tool so it won't be used in new chats
        enabled={false}
        tool={defineAiTool()({
          description: "Send an email",
          parameters: {
            type: "object",
            properties: {
              email: { type: "string" },
            },
            required: ["email"],
            additionalProperties: false,
          },
        })}
        // ...
      />
      <RegisterAiTool
        // Version 2, which will be used in new chats
        name="send-email-v2"
        tool={defineAiTool()({
          description: "Send an email",
          parameters: {
            type: "object",
            properties: {
              emailAddress: { type: "string" },
            },
            required: ["emailAddress"],
            additionalProperties: false,
          },
        })}
        // ...
      />
    </>
  );
}
