We’ve released a number of improvements to AI Copilots, including new ways to submit knowledge sources, better scroll behavior, and the ability to customize markdown components.
AI chats present a unique UX challenge due to their long, streaming replies—you
can’t use the scroll behavior you’d find in WhatsApp or iMessage, as any
message pinned to the bottom will shift upwards as the reply streams in, making
it difficult to read.
For this reason, we’ve polished the scroll behavior of our AI chats, sending the
user’s message to the top of the window, allowing the reply to stream in below.
With the new behavior, chats become far easier to read. This is enabled by
default in all AI chats.
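The pattern above can be sketched framework-agnostically: when the user sends a message, scroll the container so that message sits at the top, leaving the space below for the streaming reply. The following is an illustrative sketch, not the actual implementation; the metrics and helper name are hypothetical.

```typescript
// Illustrative sketch of the "pin the user's message to the top" scroll
// pattern described above. Not the actual implementation.

interface ScrollMetrics {
  messageOffsetTop: number; // distance of the message from the top of the scroll container
  containerScrollHeight: number; // total scrollable height
  containerClientHeight: number; // visible height of the container
}

// Compute the scrollTop that places the just-sent message at the top of
// the viewport, clamped to the maximum scrollable position so short
// threads don't over-scroll.
function scrollTopForPinnedMessage(m: ScrollMetrics): number {
  const maxScrollTop = Math.max(
    0,
    m.containerScrollHeight - m.containerClientHeight
  );
  return Math.min(m.messageOffsetTop, maxScrollTop);
}
```

In a real chat you would apply this with `container.scrollTo({ top, behavior: "smooth" })` after appending the message; because the reply streams in below the pinned message, the text the user is reading never shifts.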
When building with AI, being able to submit knowledge is important, as copilots
need to understand the context behind the questions they’re asked. This July,
we’ve enabled submitting an entire website as a back-end knowledge source,
either by crawling it or via its sitemap. You can submit websites through the
dashboard.
Knowledge remains a top priority for the AI Copilots team, and we’re currently
working on further additions in this area.
AI copilots can use markdown components when they reply to users, meaning they
can write headings, use italics, send blockquotes, and more. We’ve added the
ability to customize these markdown components, and display them however you
like.
One example is code blocks: you can now pass a custom highlighter component,
allowing your AI to style code syntax in a way that’s consistent with your
brand.
Without the custom component, the code in the snippet would appear as plain
black text. To enable this, you can pass various markdown components to the
markdown property, such as CodeBlock.
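As a rough sketch of what such a custom code-block renderer might do, here’s a framework-agnostic version; the `CodeBlock` prop names (`language`, `code`) and the output markup are illustrative assumptions, not the exact component signature, so check the docs for the real API.

```typescript
// Hypothetical sketch of a custom CodeBlock renderer. The prop names below
// mirror what a markdown code-block override would typically receive; the
// actual prop names may differ.

interface CodeBlockProps {
  language: string;
  code: string;
}

// Escape HTML so untrusted AI output can't inject markup.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Render the code block as brand-styled HTML instead of plain black text.
function renderCodeBlock({ language, code }: CodeBlockProps): string {
  return `<pre class="brand-code" data-language="${language}"><code>${escapeHtml(
    code
  )}</code></pre>`;
}
```

In React you would express the same idea as a component and pass it alongside the other markdown overrides; the key point is that the chat delegates code rendering to you, so you can plug in any highlighter you already use elsewhere in your product.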
Our AI Support Chat example demonstrates how to create a
chat that can assist users on your contact page, acting as the first line of
support. If the AI doesn’t know the answer, it’ll allow users to create a ticket for
a human agent to handle.
Here’s a list of other improvements in our changelog since our
last update:
Add blurOnSubmit prop to Composer (also available on the Composer.Form
primitive and as blurComposerOnSubmit on Thread) to control whether a
composer should lose focus after being submitted.
useErrorListener now receives "LARGE_MESSAGE_ERROR" errors when the
largeMessageStrategy option isn’t configured and a message couldn’t be sent
because it was too large for the WebSocket connection.
Add tenantId to identifyUser method as an optional parameter.
Improve URL sanitization in comments.
Add experimental setting LiveObject.detectLargeObjects, which can be
enabled globally using LiveObject.detectLargeObjects = true (default is
false). With this setting enabled, calls to LiveObject.set() or
LiveObject.update() will throw as soon as you add a value that would make
the total size of the LiveObject exceed the platform limit of 128 kB. The
benefit is that you get an early error instead of a silent failure, but the
downside is that this adds significant runtime overhead if your application
makes many LiveObject mutations.
defineAiTool()() now takes an optional enabled property. When set to
false, the tool will not be made available to the AI copilot for new/future
chat messages, but existing tool invocations that are part of the historic
chat record will still be rendered.
RegisterAiTool now also takes an optional enabled prop. This is a
convenience prop that can be used to override the tool’s enabled status
directly in React.
Reasoning parts in AiChat are now automatically collapsed when the reasoning
is done.
Add collapsible prop to AiTool to control whether its content can be
collapsed/expanded.
Add InboxNotification.Inspector component to help debug custom inbox
notifications.
Add support for Redux v5.
Add lb-lexical-cursors class to the collaboration cursors’ container.
Fix: Default z-index of Lexical collaboration cursors, and make them inherit
their font family instead of always using Arial.
Improve mentions’ serialization.
Fix: Copilot id not being passed to 'set-tool-call-result' command that is
dispatched when a tool call is responded to. Previously, we were using the
default copilot to generate messages from the tool call result.
Fix: AiChat component not scrolling instantly to the bottom on render when
messages are already loaded.
Fix: also display errors in production builds when they happen in render
methods defined with defineAiTool(). Previously, these errors would only be
shown during development.
Fix: An issue with the render component of tool calls not being displayed
correctly when the tool call signal was read before it was registered.
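The LiveObject.detectLargeObjects behavior described above can be illustrated with a self-contained sketch. This is not the real LiveObject internals, and the size accounting used in production may differ from the JSON-length approximation here; it only demonstrates the "fail early and loudly at mutation time" idea.

```typescript
// Illustrative sketch of the detectLargeObjects semantics: throw on the
// mutation that would push the object past the platform limit, instead of
// failing silently later. Not the actual LiveObject implementation.

const PLATFORM_LIMIT_BYTES = 128 * 1024; // 128 kB, per the changelog

class SizeCheckedObject {
  static detectLargeObjects = false; // off by default, as in the changelog
  private data: Record<string, unknown> = {};

  set(key: string, value: unknown): void {
    const next = { ...this.data, [key]: value };
    if (
      SizeCheckedObject.detectLargeObjects &&
      JSON.stringify(next).length > PLATFORM_LIMIT_BYTES
    ) {
      // Early, loud failure at mutation time...
      throw new Error(
        `Object would exceed the ${PLATFORM_LIMIT_BYTES}-byte limit`
      );
    }
    this.data = next; // ...instead of a silent failure later on.
  }
}
```

Serializing the object on every set() is also why the changelog warns about significant runtime overhead for applications that perform many mutations.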