Solving Fundamental Customer Problems with AI: A Conversation with Loom Design Principal, Sean Thompson
Sep 1, 2023
Sean Thompson is a principal product designer at Loom – our favorite video communication app! He’s helped lead their work from a vision and craft perspective to make use of LLMs across Loom. Just this week, they announced Loom AI is now in GA as a paid add-on for all customers (more on that here: loom.com/ai), and it immediately edited out 48 “umm’s” from my first recording. 🥳
We were fortunate to work with Sean at Twitter, where we first got to experience what it looks like for senior designers to take the lead showing their teams what a product experience could become. He’s played a similar role at Loom when it comes to incorporating AI, and we found it especially compelling to hear him talk about the different ways they’re using AI to help customers feel more confident recording themselves.
We asked him to share his perspective & learnings as a design leader driving the adoption of LLMs in an established product.
TL;DR: Don’t get lost in the hype. Focus on real problems for customers. Impact follows!
Read on for more.
Watch Loom's launch video for Loom AI
Ian Cairns: First, it would help to anchor on where your team is on the journey of building with LLMs. Can you give us a quick summary of what Loom is, and how you’ve tried to improve the product so far with AI?
Sean Thompson: Loom is increasingly becoming a noun that refers to sharing something at work with video. It’s a communication medium that records you and whatever is on your screen simultaneously, and it’s positioned to be an alternative to many emails, chats, and meetings. Common use cases include explaining bugs, collaborating across timezones, project walk-throughs, and engaging with potential customers.
Loom has been heavily investing in AI because we know that most people in the workplace are new to recording themselves with video, and we believe that we can better help people both create and watch Looms through AI-assisted features.
We just launched Loom AI: a paid collection of “assistance” features that augment recordings with automatic titles, summaries, tasks, and chapters, and that remove filler words (“um,” “like,” etc.), awkward-silence gaps, and more.
We see this as our first big step towards creating the equivalent of an attentive camera crew for anyone recording a Loom.
Ian: You led the early pitch to incorporate AI into the Loom customer experience, which might surprise some folks since you’re a product designer. Can you talk about what caused you to advocate for this, and what problems you were trying to solve?
Sean: The Design and Research team at Loom has helped our organization familiarize itself with one of our biggest challenges:
Recording yourself for an invisible, async audience of coworkers is a new behavior that can be intimidating, and norms are not yet established.
With this pain point in mind, and after reflecting on online conversations about how tools like Loom might help neurodivergent people communicate, while the entire industry and our leadership team wondered where the rise of GPT would lead us, I started imagining a future where Loom could greatly enhance any video recording to ensure it looks and sounds like the clearest, most effective version of someone speaking. We saw GPT rapidly challenging what it means to write something with text, and wondered what that means for video in the long term.
This idea really excited me because I believe so many great ideas are trapped in the minds of people who may struggle to communicate. I wondered out loud about it with our CEO, VP of Design, COO, head of Research, and a Principal Engineer, who are all very design-minded, and we came up with a compelling final vision together.
Ian: Tactically, what can you tell us about how you and your team have worked together to ship these features? What’s been the role of product designers relative to engineers, PMs and other team members?
Sean: Loom AI is something we charge for, so it was even more critical that we produced a high-quality, finished product that would not only drive initial interest, but also deliver recurring value.
To get there, we frequently informed our approach through demos from our Research, Design, Engineering, Product, and Marketing teams—nearly every discipline at Loom informed the design and build of what we shipped this week. Specifically:
Every discipline worked on prompt engineering, which helped us experiment from very different vantage points.
Our VP of Design, Christina Nguyen White, formed a pod of AI-focused Product and Brand design folks to ensure we designed holistically.
Our Research team concept tested everything we produced with near-instant turnaround, to ensure we were grounded more in customer feedback than our excitement about AI itself.
Our Engineering team worked with external and internal partners to rapidly experiment with different prototypes for filler word removal, silence removal, and more.
It also was an initiative where many of our feature teams shipped together, so coordination across a single mega-initiative was key.
I loved that with this work, no single role was expected to own the conversations around our strategy and execution, given the complexity and infancy of this space.
Ian: What’s been the hardest part (or parts?) of getting it right? I’m curious if you have any helpful lessons learned from solving problems on the path to production.
Sean: There are so many difficult parts to shipping a compelling product that people are willing to pay for, especially when it’s built on an emerging foundation like AI.
The easiest part for sure was getting excited about our ultimate vision. The hardest part has been determining what AI allows us to do today versus tomorrow, and how best to leverage what’s possible today to deliver solid customer value worthy of payment. We had to figure out how to ship in the short term while continuing to invest in what we see as the long-term potential of AI+video messaging.
It’s also just incredibly complex to work as a united team while shipping so many separate features; it took real discipline from all of our designers, engineers, researchers, leadership team, and more.
Ian: We know it’s early — it’s early for most companies who have shipped new LLM-powered features to customers. Can you share any early signals or learnings from your customers? And how do you evaluate the impact of what you’ve built?
Sean: It is very early! The precursor to the launch of Loom AI was our beta launch earlier this year of auto-titling and summarization of Looms using AI. That launch gave us immense confidence to further invest in this space.
Auto-titling and summarizing led to a very large increase in the number of people who engage with Looms, because the videos were better contextualized in the places where they get embedded (Slack, Linear, Jira, email, etc.). Loom titles used to default to timestamps unless someone edited them, so you had to watch a video to know what it was about.
The auto-titling feature might seem like one of the simplest things you could build with LLMs, but it was actually one of the biggest metric drivers we’ve ever seen from a new feature, since people could immediately understand what they’d be watching.
Ian: As you think about AI becoming a more accessible tool to use for building software products, what advice would you give to fellow product designers? Anything you’ve found particularly helpful that they should consider learning or doing now?
Sean: I think the only advice I have is to see AI for what it is: AI is a very powerful ingredient that can help solve existing real world problems in compelling ways — if we are careful not to over-index on the technology itself.
Regardless of how exciting this space is, it’s important that we stick to what we know is behind every great product: Clear, distinct, and enduring customer-facing value.