We've released improvements to AI Copilots, including streaming AI tool results, partial markdown streaming, and new APIs for copilots. We've also added a new Comments option, and implemented SAML SSO, directory sync, and MFA for Enterprise customers.
AI Copilots allows you to add tools to your chat, giving your AI the ability to
take actions. We’ve added support for streaming tool call results, meaning
you can start seeing results as soon as a tool begins executing.
This is particularly helpful when AI tools are used to generate large amounts of
content, such as reports, documents, or code. Below, you can see how the tool
streams updated code into the editor as it’s generated.
This is enabled by default in AI tools, and it’s easy to implement and use.
AI tools allow you to define parameters, which the AI can stream into. For
example, the tool above uses a code parameter, where the AI will place generated
code. We can tell the AI that this is a string.
```tsx
<RegisterAiTool
  name="edit-code"
  tool={defineAiTool()({
    description: "Edit the code",
    parameters: {
      type: "object",
      properties: {
        code: { type: "string" },
      },
    },
    // Handle behavior and UI
    // ...
  })}
/>
```
Inside the render function we can handle the behavior and show some UI. Tools
have three stages of execution, and in the receiving stage you can access the
partially streamed results: for example, partialArgs.code holds the code as it
streams in.
As each part of the stream arrives, we set the current code in our editor with
partialArgs.code. Then, after completion, we set the final code with
args.code, and let the AI know what it’s displaying with respond.
```tsx
<RegisterAiTool
  name="edit-code"
  tool={defineAiTool()({
    description: "Edit the code",
    parameters: {
      type: "object",
      properties: {
        code: { type: "string" },
      },
    },
    render: ({ stage, partialArgs, args, respond }) => {
      // Code is streaming in, add it to the editor
      if (stage === "receiving") {
        setCode(partialArgs.code);
        return <AiTool title="Generating code…" />;
      }

      // Code stream completed, set final code, let AI know
      if (stage === "executing") {
        setCode(args.code);
        respond({
          data: {},
          description: "You're displaying the code",
        });
      }

      // Render completion message
      return <AiTool title="Code generated" />;
    },
  })}
/>
```
The AiTool component is
returned to show a small piece of UI in the chat, letting end users know what’s
happening.
We’ve improved the streaming of markdown in AI chats, allowing you to render
partial markdown as it’s streamed in. This means you’ll no longer see
markdown syntax briefly flash into view before disappearing.
This feature is enabled by default in all AI chats.
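As a way to picture the improvement, here’s a toy sketch of one technique for avoiding the flash. This is our own illustration under assumed behavior, not the actual Liveblocks implementation:

```typescript
// Toy sketch (not the real implementation): if a streamed chunk ends
// mid-bold ("Hello **wor"), temporarily close the marker before
// rendering, so the reader sees bold text instead of raw asterisks.
function closeDanglingBold(partial: string): string {
  const markerCount = (partial.match(/\*\*/g) ?? []).length;
  // An odd number of "**" markers means one is still open.
  return markerCount % 2 === 1 ? partial + "**" : partial;
}
```

The same idea extends to links, tables, and other constructs: render a completed-looking version of whatever has arrived so far, then replace it once the final markdown lands.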
It’s now possible to programmatically manage copilots and knowledge sources with
our new Node.js methods and REST APIs. This means you no longer need to use the
dashboard to create copilots, edit prompts, upload knowledge, and more.
```typescript
// Create a new copilot
const copilot = await liveblocks.createAiCopilot({
  name: "My AI Assistant",
  systemPrompt: "You are a helpful AI assistant for our team.",
  provider: "openai",
  providerModel: "gpt-4",
  providerApiKey: "sk-...",
});

// Upload a PDF file as a knowledge source
const { id } = await liveblocks.createFileKnowledgeSource({
  copilotId: copilot.id,
  file: pdfFile,
});

// Get a list of all knowledge sources
const { data: sources, nextCursor } = await liveblocks.getKnowledgeSources({
  copilotId: copilot.id,
});
```
In total, there are thirteen new APIs—learn more in our documentation under
@liveblocks/node and
REST API.
You can upload back-end knowledge to your AI copilot, allowing it to accurately
answer questions using information from your knowledge base. You can now control
when knowledge activates, making your AI quicker to respond when it doesn't need
it, and more intelligent when it does.
To set your knowledge prompt, visit the dashboard, and find the
copilot you’d like to edit. Define when AI should use your knowledge, and hit
save.
You can now filter for specific AI chats with
useAiChats, allowing you to
render different lists of chats in different places. Querying is enabled via
custom metadata, which you can set when you create a chat.
```typescript
// Create a chat with `color="red"` metadata
createAiChat({
  id: "my-ai-chat",
  metadata: { color: "red" },
});

// Only returns chats with `color="red"` metadata
const { chats } = useAiChats({
  query: { metadata: { color: "red" } },
});
```
More complex querying is also possible, for example you can filter for metadata
that contains a list of items, or match chats where metadata doesn’t exist.
```typescript
const { chats } = useAiChats({
  query: {
    metadata: {
      // Get chats with both `urgent` and `billing` tags
      tag: ["urgent", "billing"],
      // Get chats _without_ `archived` metadata
      archived: null,
    },
  },
});
```
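To make those matching rules concrete, here’s a small standalone sketch of the query semantics described above. This is our own illustration of the behavior, not code from the library: an array value requires the chat’s metadata to contain every listed item, and null requires the key to be absent.

```typescript
type MetadataValue = string | string[];
type Metadata = Record<string, MetadataValue>;
type MetadataQuery = Record<string, string | string[] | null>;

// Illustrative sketch of the query semantics, not the library internals.
function matchesMetadata(metadata: Metadata, query: MetadataQuery): boolean {
  return Object.entries(query).every(([key, expected]) => {
    const actual = metadata[key];
    // `null` matches chats where the key is absent
    if (expected === null) return actual === undefined;
    // An array means the metadata must contain every listed item
    if (Array.isArray(expected)) {
      return Array.isArray(actual) && expected.every((v) => actual.includes(v));
    }
    // Otherwise require an exact match
    return actual === expected;
  });
}
```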
In Comments, you can now enable a “Show more replies” button in
individual threads. You can set exactly which comments should be displayed, and
how many. Below you can see an example of different options.
To enable this in your
Thread component, set the
maxVisibleComments
property. Here are the ways it’s used in the video.
```tsx
// Show newest
<Thread maxVisibleComments={4} ... />

// Show oldest
<Thread maxVisibleComments={{ max: 4, show: "oldest" }} ... />

// Show both
<Thread maxVisibleComments={{ max: 4, show: "both" }} ... />
```
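For a concrete picture of the three options, here’s a hedged sketch of how such a setting could decide which comments stay visible. It’s our own illustration rather than the component’s source; in particular, how "both" splits the budget between the two ends is an assumption.

```typescript
type Show = "newest" | "oldest" | "both";

// Illustrative sketch of the selection semantics, not the actual component.
function visibleComments<T>(comments: T[], max: number, show: Show = "newest"): T[] {
  if (comments.length <= max) return comments;
  if (show === "oldest") return comments.slice(0, max);
  if (show === "newest") return comments.slice(-max);
  // "both": keep some from the start and the rest from the end
  const head = Math.ceil(max / 2);
  return [...comments.slice(0, head), ...comments.slice(-(max - head))];
}
```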
We’ve added new login options for Enterprise customers, including SAML single
sign-on (SSO), directory sync, and multi-factor authentication (MFA).
SAML single sign-on enables teams to
enforce authentication via identity providers like Okta, Azure AD, Google
Workspace, or OneLogin using SAML, simplifying account management and helping
organizations meet internal security requirements.
When SSO is paired with
directory sync, teams can
manage organization membership directly from their identity provider, reducing
manual overhead and aligning with standard enterprise access control practices.
Additionally, we’ve added support for
multi-factor authentication,
allowing teams to enforce an additional layer of security for their
organization.
We’ve just updated our Figma kit with new AI Copilots components, meaning you
can prototype advanced AI chats in your product. Each component corresponds to a
real component in our React package, so you can easily turn your design into a
production-ready application in a day.
Update useSendAiMessage to use the last used copilot id in a chat when no
copilot id is passed to the hook or the method returned by the hook.
In RoomProvider, initialPresence and initialStorage now get re-evaluated
whenever the room ID (the id prop) changes.
Add a minimal appearance to AiTool via a new variant prop.
Improve Markdown rendering during streaming in AiChat: incomplete content is
now handled gracefully so things like bold, links, or tables all render
instantly without seeing partial Markdown syntax first.
Render all messages in AiChat as Markdown, including ones from the user.
Improve shimmer animation visible on elements like the
"Thinking…"/"Reasoning…" placeholders in AiChat.
Improve LiveList conflict resolution to keep the conflicting element
closer to its intended destination.
Scroll thread annotations into view when a thread in AnchoredThreads is
selected, similarly to @liveblocks/react-lexical.
More info on styling AI chat components.
Disambiguate semantics for LiveList.delete().
Add onComposerSubmit callback to AiChat triggered when a new message is
sent. It can also be used to customize message submission by calling
useSendAiMessage yourself.
Overrides and CSS classes for AiChat's composer have been renamed.
useSendAiMessage now accepts the chat ID and/or options passed to the
returned function rather than to the hook. This can be useful in dynamic
scenarios where the chat ID isn't yet known when the hook is called, for example.
useCreateAiChat now accepts a chat ID as a string instead of
{ id: "chat-id" }.
Allow using custom composers in FloatingComposer via the
components={{ Composer }} prop.
Add ATTACH_THREAD_COMMAND command to manually create a thread attached to
the current selection.
Allow editing first and last name in personal settings.
Improve Markdown lists in AiChat: better spacing and support for arbitrary
starting numbers in ordered lists. (e.g. 3. instead of 1.)
Add MAU breakdown to the historical usage table on the “Billing & usage” page
(MAU used / Non-billed MAU).
Support OpenAI compatible AI models in AI Copilots.
Support Gemini 2.5 Pro and Gemini 2.5 Flash Thinking models in AI Copilots and
remove support for the corresponding preview models.
Fix: LiveblocksYjsProvider.getStatus() returning incorrect synchronization
status for Yjs provider.
Fix: useSyncStatus returning incorrect synchronization status for the Yjs
provider. We now compare hashes of the local and remote snapshots to check for
synchronization differences between the local and remote Yjs documents.
Fix: knowledge passed as a prop to AiChat no longer leaks that knowledge to
other instances of AiChat that are currently mounted on screen.
Fix: a bug that caused unreliable storage updates under high concurrency.
Fix: an issue that could cause LLM responses to appear to "hang" if the token
limit was exceeded during response generation. When this happens, the
response now shows a clear error to the user.
Fix: Composer uploading attachments on drop when showAttachments is set to
false.
Fix: attachment names showing URL-encoded characters. (e.g. a%20file.txt
instead of a file.txt)
Fix: race condition where AI tools were not always executing. This could
happen when using useSendAiMessage first and then immediately opening the
<AiChat /> afterwards.
Fix: Markdown rendering of HTML tags in AiChat. (e.g. "Use the <AiChat />
component" would render as "Use the `` component")
Fix: a bug where the copilot id wasn't passed when setting a tool call result
if the tool call was defined with an execute callback.
Fix: improved Markdown streaming in AiChat was only enabled in reasoning
blocks; it's now enabled for all Markdown.
Fix: a bug where deleting a thread/comment from Tiptap would also remove any
comments contained within it.