Webinar: The new AI development workflow

Building generative AI software requires a new development workflow. Everyone is adapting to building with generative AI: figuring out how to ship faster with greater confidence, and then how to keep improving products over time.

In this webinar we'll share best practices and lessons learned from working closely with leading AI product development teams to build and run their AI product feedback loops. This content is especially for team leads and managers thinking about how to improve their team’s structure and workflows, make better use of both offline and online evals, and ultimately deliver great AI products and agents to customers.

What You'll Learn:

What’s different from traditional software development workflows and traditional ML workflows
How leading generative AI teams run the "Build-Test-Observe-Iterate" process for improving AI product quality
How to create a tighter feedback loop between engineers, ML teams, product managers, and domain experts
How to set up a new AI development workflow for your team that lets you move faster, improve quality, and ship with confidence

When?

Wednesday, August 13, 10am MT (9am PT, 12pm ET)

You'll Hear From:

Eric Ryan, Co-Founder & CTO of Freeplay

Ian Cairns, Co-Founder & CEO of Freeplay

Jeremy Silva, Product Lead

Sign up to attend, or to receive the recording afterward!

AI teams ship faster with Freeplay

"Freeplay transformed what used to feel like black-box ‘vibe-prompting’ into a disciplined, testable workflow for our AI team. Today we ship and iterate on AI features with real confidence about how any change will impact hundreds of thousands of customers."

Ian Chan

VP of Engineering at Postscript

"At Maze, we've learned great customer experiences come through intentional testing & iteration. Freeplay is building the tools companies like ours need to nail the details with AI."

Jonathan Widawski

CEO & Co-founder at Maze

"The time we’re saving right now from using Freeplay is invaluable. It’s the first time in a long time we’ve released an LLM feature a month ahead of time."

Luis Morales

VP of Engineering at Help Scout

"As soon as we integrated Freeplay, our pace of iteration and the efficiency of prompt improvements jumped—easily a 10× change. Now everyone on the team participates, and the out-of-the-box product-market fit for updating prompts, editing them, and switching models has been phenomenal."

Michael Ducker

CEO & Co-founder at Blaide

"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should."

Justin Reidy

Founder & CEO at Kestrel