We got to talk with Lijuan Qin, Head of Product for Zoom AI, on the latest episode of Deployed. She talked about Zoom’s new agent platform, the complexity of moving from Q&A chatbots to agents that complete tasks automatically, and how that changes what you measure and what "good" looks like for an AI product.
Lijuan has a PhD in AI, and she spent 20 years at Microsoft working on natural language processing and video understanding before joining Zoom. She's watched this field go from academic NLP research to generative agents, and that long view comes through in the conversation.
We get into Zoom's "conversation to completion" vision, how her team thinks about quality for a radically open-ended product, and why she believes the right AI metrics measure what actually got done. This is an especially valuable listen for product managers and anyone thinking about high-level product strategy for AI agents.
The episode is live on Spotify, Apple Podcasts, and YouTube, and you can watch the whole thing below. Read on for some of our favorite highlights.
Stop Measuring AI Like a Search Engine
Lijuan's framework for AI quality starts with a reframe. If you're scoring accuracy primarily on individual queries, you might be treating your product more like a search engine. As the Zoom AI platform has moved toward agents that complete workflows for people, her team has started to ask: Did the AI actually help the person finish their work? Did the task get completed? Did the output hold up, or did the person have to redo things?
"When we're evaluating, it's not about how many questions you're asking, or how many meeting summaries you checked — that's just a note-taking assistant. We're asking: what are you going to complete? What's the task you're trying to get done after the meeting? We call this AI-first. It's intent-driven, rather than which tool to use. We look at the output value rather than how many times you're using it. If you're using it a lot, maybe it's because we're not giving you the right answer."
That last line is an important reminder for product builders. High engagement can be a negative signal with agents. If someone keeps going back and forth with your agent, it might be because the product is failing them. Instead of engagement, Lijuan's team uses weekly active retention as their north star. People coming back means the product has become part of how they work. People churning after interactions means it isn't working.
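To make the metric concrete, here is a rough sketch of how a weekly retention number like that could be computed from usage logs. This is purely illustrative; the event shape and function name are our own, not Zoom's implementation.

```python
from datetime import date, timedelta

def weekly_retention(events, cohort_week_start, target_week_start):
    """Share of users active in the cohort week who come back in a later week.

    `events` is a list of (user_id, date) pairs -- a stand-in for real
    usage logs; the field names and shapes here are hypothetical.
    """
    def in_week(d, start):
        return start <= d < start + timedelta(days=7)

    cohort = {u for u, d in events if in_week(d, cohort_week_start)}
    returned = {u for u, d in events
                if u in cohort and in_week(d, target_week_start)}
    return len(returned) / len(cohort) if cohort else 0.0

events = [
    ("alice", date(2024, 1, 1)), ("bob", date(2024, 1, 2)),
    ("alice", date(2024, 1, 8)),  # alice comes back the following week
]
print(weekly_retention(events, date(2024, 1, 1), date(2024, 1, 8)))  # 0.5
```

The point of the north-star choice is visible even in this toy version: a user who sends fifty messages in one week and never returns counts against you, not for you.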
As products shift from chatbot-shaped to agent-shaped, the gap between engagement and actual value widens.
Conversation to Completion
We all likely think of Zoom first as a video conferencing tool. Lijuan talks about how Zoom's advantage in AI isn't the video call, it's the conversation data flowing through every call, every phone session, every contact center interaction. Her team is trying to use that data to close the gap between talking about work and getting it done.
Her team knows action items from meetings are mostly broken today. Every AI note-taker produces a list of to-dos, and then nothing happens. Those items sit in a doc nobody opens again. Zoom wants its AI agents to do the follow-through: kick off workflows, draft documents, handle the steps that come after the meeting ends.
"People already take it for granted — oh, AI can do my note-taking, I have a list of action items. But the secret is: the action items, the to-dos, are not done post-meeting. We're really doing the workflow, helping you get things done, get all the actions happening right now."
Lijuan frames Zoom AI as something that's been sitting on your shoulder through the meeting and already knows what needs to happen. It doesn't wait for you to ask. It acts, with your oversight and approval.
There's a design question underneath this that every team building an agent has to answer: do you wait for the user to tell the agent what to do, or does the agent infer intent from context and act proactively? The second path carries more risk, and it's the one Zoom is betting on. That's a harder product to build and a much harder one to evaluate.
How Zoom Scopes Failure
We asked Lijuan how she runs experimentation at Zoom's scale, especially with enterprise customers. She went straight to decision frameworks.
Her team defines the assumptions and success criteria before they build, and they deliberately keep the scope of each experiment small and focused. Everyone sees the same framework, so people can make decisions and move without waiting in approval queues.
"It's always about coming back to that decision framework. We know what our assumptions are, what we're trying to verify. So the scope of failure is really boxed — there's not much consequence beyond it. The decision framework is clear, you know what the consequence is, and most of the time there's no terrible consequence. It's just verifying what works and what doesn't. That's a different way to define what failure is."
If the downside of an experiment is learning, and everyone knows that going in, the team doesn't need permission to run it. The framework gives permission.
On the enterprise side, Lijuan described some patterns for shipping AI into large organizations that others might recognize:
AI features are default-on for consumers, default-off for enterprises
Staged 30-60-90 day rollouts
Admin controls and gradual opt-in
Give enterprise customers control over the pace, and you can still ship fast.
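The default-on/default-off split is simple to sketch in code. The names below are hypothetical, not Zoom's actual configuration or API; this just shows the shape of the pattern.

```python
def ai_feature_enabled(account_type: str, admin_opted_in: bool) -> bool:
    """Default-on for consumer accounts; default-off for enterprise
    accounts until an admin explicitly opts in.

    Illustrative only -- account types and the opt-in flag are assumptions.
    """
    if account_type == "consumer":
        return True  # consumers get the feature by default
    return admin_opted_in  # enterprises: off unless the admin enables it

print(ai_feature_enabled("consumer", False))    # True
print(ai_feature_enabled("enterprise", False))  # False
print(ai_feature_enabled("enterprise", True))   # True
```

In practice the enterprise branch would also gate on the rollout stage (the 30-60-90 day windows mentioned above), but the core idea is the same: the admin flag, not the vendor, controls the pace.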
—
Lijuan's parting advice: be clear about who you're building for and what the "aha moment" is in your product. Pick a core problem and solve it thoroughly, rather than adding AI incrementally across a dozen features.
Her dream, she told us, is wrapping up a full day of meetings in five minutes because Zoom AI already ran the workflows that matter, and she can just review and approve. It's a product vision anyone who sits through lots of meetings can get excited about. And the principles she's using to get there (measure task completion over interactions, scope experiments so failure is cheap) are ones any team building AI products can steal.
For more on Zoom AI Companion, visit ai.zoom.us. You can follow Lijuan on LinkedIn.
Check out the full episode on Spotify, Apple Podcasts, YouTube.
Authors: Ian Cairns
Categories: Podcast