It’s easier than ever to deeply integrate advanced AI into your app with
Liveblocks AI Copilots. Start with our pre-built chat components
for React, and add knowledge, tools, and custom components.
Chats are powered by our WebSocket collaboration layer and update in real-time,
even with multiple browser tabs open. If you close the page, replies will keep
streaming in, and you can re-open it to continue the conversation.
Each chat in your app is tied to an authenticated user. Each user can have
multiple chats, and their chats are stored permanently. Replies can stream into
multiple chats at once, and switching between chats never causes
interruptions.
Knowledge is a way to pass information to your AI, so that it can understand
context and provide intelligent answers. There are two ways to add it: on the
front-end and on the back-end.
Front-end knowledge is ideal for passing relatively small amounts of data that’s
relevant to the current user or page. For example, you could pass the user’s
payment plan, the user’s information, or the current document’s text. Here’s how
to add it with
RegisterAiKnowledge.
<RegisterAiKnowledge
  description="The current user's payment plan"
  value="Enterprise"
/>

<RegisterAiKnowledge
  description="The current user's info"
  value={{
    name: "Jody Hekla",
    email: "jody.hekla@example.com",
    teams: ["Engineering", "Product"],
  }}
/>

<RegisterAiKnowledge
  description="The current document's content"
  value={`
# Untitled

This is an untitled document.
  `}
/>
The AI will have access to this information when it’s generating a response. For
example, if you ask “Which plan am I on?” it’ll reply with “You're on the
Enterprise plan”.
Back-end knowledge is ideal for passing large amounts of data, for example
documentation and manuals. The AI will be able to read your knowledge base and
accurately answer questions using your information. You can import PDF files,
images, and web pages when managing your copilot from the dashboard.
By default, Liveblocks has an AI model set up, but you can also create a
copilot with the AI model of your choice from the Liveblocks dashboard. You can
define AI prompts, back-end knowledge sources (e.g. PDFs, websites), and a
number of other settings for your copilot.
OpenAI, Anthropic, and Gemini models are supported. Copy the copilotId from
the dashboard to use it with your chat.
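For example, assuming you're rendering the pre-built AiChat component from
@liveblocks/react-ui, a minimal sketch of wiring in the copied ID could look
like this (the chatId and copilotId values are placeholders):

import { AiChat } from "@liveblocks/react-ui";

// Point the pre-built chat at the copilot configured in the dashboard.
// Replace "co_xxxxxxxx" with the copilotId copied from your dashboard.
function SupportChat() {
  return <AiChat chatId="support" copilotId="co_xxxxxxxx" />;
}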
Tools allow you to add complex interactions into your AI chat, for example
allowing AI to autonomously take actions, interact with your front-end, and
render custom components. Tools are implemented with
RegisterAiTool.
Above is an example of a human-in-the-loop action that allows AI to invite
members to a project. When confirm is clicked, a member is added to the page and
a toast is triggered.
By defining a tool with an execute function, the AI can call a function in your
front-end when it thinks it's relevant. A simple example is a tool that shows a
toast notification on the screen. Whenever you say “Show me a toast”, the AI will
call the execute function and the toast will appear.
<RegisterAiTool name="send-toast-notification" tool={defineAiTool()({ description:"Send a toast notification", execute:async()=>{__toast__("Here's a toast!"); return{ data:{}, description:"You sent a toast",};},})}/>
You can take this a step further by adding parameters to the tool, in this case
a message, allowing the AI to decide what to write in the toast. You can show
UI in the chat, letting the user know a toast was sent, using render.
<RegisterAiTool name="send-toast-notification" tool={defineAiTool()({ description:"Send a toast notification", parameters:{ type:"object", properties:{ message:{ type:"string", description:"The message to display in the toast",},},}, execute:async({ message })=>{__toast__(message); return{ data:{}, description:"You sent a toast",};}, render:()=><AiTooltitle="Toast sent"icon="🍞"/>,})}/>
The AiTool component renders
pre-built UI that matches the chat.
You can create human-in-the-loop actions, where the AI asks the user to confirm
an action before it’s executed. This is useful for actions that are destructive
or irreversible, for example deleting a file. To implement this, don’t define
execute, and instead use
AiTool.Confirmation
inside render.
<RegisterAiTool name="delete-file" tool={defineAiTool()({ description:"Delete a file from the user's workspace", parameters:{ type:"object", properties:{ fileName:{ type:"string", description:"Name of the file"},},}, render:({ stage, args, result, types })=>{return(<AiTooltitle={stage ==="executing"?"Delete file?":"File deleted"}icon="🗑️"> <AiTool.Confirmation types={types} confirm={async({ fileName })=>{await__deleteFile__(fileName);return{ data:{ deletedFileName: fileName },};}} cancel={()=>{}} > Are you sure you want to delete {args.fileName}?</AiTool.Confirmation></AiTool>);},})}/>
The example above will show a confirm/cancel box in the chat, and the confirm
function will be triggered when the user clicks the button.
You can render fully custom components inside the chat. For example, below I’m
adding a button that creates a new project. Whenever the AI hears “Create project”,
it’ll render the button below its message.
<RegisterAiTool name="create-project" tool={defineAiTool()({ description:"Create a new project", render:()=>(<buttonclassName="bg-blue-500 rounded-md p-2 text-white"onClick={()=>{__newProject__();}}> Create project</button>),})}/>
You can also create custom confirm/dialog boxes using render.
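For instance, here’s a rough sketch of a tool whose render returns a custom
dialog with confirm and cancel buttons; __archiveProject__ and __dismissDialog__
are hypothetical front-end helpers, in the same placeholder style as the
snippets above:

<RegisterAiTool
  name="archive-project"
  tool={defineAiTool()({
    description: "Archive the current project",
    // No execute: the buttons only trigger front-end actions when clicked
    render: () => (
      <div className="rounded-md border p-3">
        <p>Archive this project?</p>
        <button onClick={() => __archiveProject__()}>Confirm</button>
        <button onClick={() => __dismissDialog__()}>Cancel</button>
      </div>
    ),
  })}
/>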
Our AI Dashboard Reports example contains an
AI pop-up chat that allows you to ask questions about the transactions and
invoices represented on the page. Additionally, it can navigate you to different
pages, invite new users, send unpaid invoice reminders, and share how many seats
are left on your plan.
Our AI Popup Chat example demonstrates how to create a
floating pop-up chat in the corner of your application. It has a panel for
switching between different chats, and a button to open a new chat.
Rename UPDATE_USER_NOTIFICATION_SETTINGS_ERROR to
UPDATE_NOTIFICATION_SETTINGS_ERROR when using useNotificationSettings or
useUpdateNotificationSettings.
The onMentionClick prop on Thread and Comment now receives a
MentionData object instead of a userId string.
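As a hedged migration sketch, assuming the MentionData and ThreadData types are
exported from @liveblocks/client and that user mentions expose the mentioned
user’s ID as id alongside a kind field (matching the shape described for the
email primitives below), updating a handler might look like this:

import { Thread } from "@liveblocks/react-ui";
import type { ThreadData } from "@liveblocks/client";

// Before: onMentionClick={(userId) => openUserProfile(userId)}
// After: the handler receives a MentionData object
function CustomThread({ thread }: { thread: ThreadData }) {
  return (
    <Thread
      thread={thread}
      onMentionClick={(mention) => {
        // Assumption: user mentions expose the user's ID as `mention.id`
        console.log(mention.kind, mention.id);
      }}
    />
  );
}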
The Mention component on the Comment.Body and Composer.Editor primitives
now receives a mention prop instead of a userId one.
The MentionSuggestions component on the Composer.Editor primitive now
receives a mentions prop instead of a userIds one, and the
selectedUserId prop has been renamed to selectedMentionId.
Rename LiveblocksUIConfig to LiveblocksUiConfig for consistency with other
Liveblocks APIs.
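Assuming you import it from @liveblocks/react-ui, the migration is a one-line
rename:

// Before
import { LiveblocksUIConfig } from "@liveblocks/react-ui";

// After
import { LiveblocksUiConfig } from "@liveblocks/react-ui";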
Remove deprecated htmlBody/reactBody properties from
prepareThreadNotificationEmailAsHtml /
prepareThreadNotificationEmailAsReact; use body instead.
Remove htmlContent/reactContent properties from
prepareTextMentionNotificationEmailAsHtml /
prepareTextMentionNotificationEmailAsReact; use content instead.
The prepareTextMentionNotificationEmailAsReact and
prepareTextMentionNotificationEmailAsHtml functions’ returned data changed
slightly:
The id property is now named textMentionId; it refers to the mention’s
Text Mention ID, not the user ID used for the mention.
The id property now refers to the mention’s ID, as in the user ID used for
the mention.
The element prop received by the Mention component in
prepareTextMentionNotificationEmailAsReact now contains an id property
instead of userId, and a new kind property to indicate the mention’s kind.
The getMentionedIdsFromCommentBody utility has been replaced by
getMentionsFromCommentBody.
Add InboxNotification.Inspector component to help debug custom inbox
notifications.
Add support for Redux v5.
Fix default z-index of collaboration cursors, and make them inherit their
font family instead of always using Arial.
Add lb-lexical-cursors class to the collaboration cursors’ container.
Improve URL sanitization in comments.
Improve mentions’ serialization.
Add experimental setting LiveObject.detectLargeObjects, which can be
enabled globally using LiveObject.detectLargeObjects = true (default is
false). With this setting enabled, calls to LiveObject.set() or
LiveObject.update() will throw as soon as you add a value that would make
the total size of the LiveObject exceed the platform limit of 128 kB. The
benefit is that you get an early error instead of a silent failure, but the
downside is that this adds significant runtime overhead if your application
makes many LiveObject mutations.
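Enabling it is a one-line global opt-in; a minimal sketch, assuming LiveObject
is imported from @liveblocks/client:

import { LiveObject } from "@liveblocks/client";

// Throw early when a LiveObject.set()/update() call would push the object
// past the 128 kB platform limit (default: false; adds runtime overhead)
LiveObject.detectLargeObjects = true;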
Fix: also display errors in production builds when they happen in render
methods defined with defineAiTool(). Previously, these errors would only be
shown during development.
Fix an issue with the render component of tool calls not being displayed
correctly when the tool call signal was read before it was registered.
Fix caching issue when editing notification settings.