Give your AI team the tools they need
Get started with better tools for evals, experiments, LLM observability and more.
Startup Discounts Available
Build AI products with best practices and best-in-class tools from day one.
Up to 50% off Growth and Enterprise plans.
We'll help you put together the package you need. Reach out to discuss.
Free
$0/month
Start the data flywheel to continuously improve your AI applications.
Access to all Freeplay features
Unlimited users
Unlimited auto-evals
10,000 completions logged per month
1 project
10 test runs per month
$5 Freeplay credits
Growth
Starts at $500/month*
Everything you need to build and improve AI in production.
Everything from Free
Unlimited users
Unlimited auto-evals
100,000 completions logged per month
5 projects
50 test runs per month
*Custom options available
Enterprise
Custom Pricing
For teams who need high volume, premium support and self-hosting.
Everything from Growth
500,000 completions per month
Self-hosting
SSO / SAML
SLAs
Bring your own models
Dedicated Forward Deployed AI Engineer
Bespoke training sessions and office hours
Choose the best plan for your team
Free: $0/month
Growth: $500+/month
Enterprise: Custom Pricing
Features
Evals
Monitoring & Observability
Offline Tests & Experiments
Prompt & Model Management
Customizable Playground
Dataset Management
Human Labeling & Review Workflows
Model Support
Free & Growth: Common providers including OpenAI, Anthropic, Google Vertex, AWS Bedrock, Groq & more
Enterprise: Fully customizable including self-hosted models
Team
Users: Unlimited on all plans
Projects: Free 1; Growth starts at 5; Enterprise unlimited
Usage
Auto-evals per month: Unlimited on all plans
Completions per month: Free 10,000; Growth starts at 100,000; Enterprise starts at 500,000
Test Runs per month: Free 10; Growth starts at 50; Enterprise unlimited
Data Retention: Free 90 days; Growth and Enterprise custom
Support
Dedicated Slack Channel
Dedicated Forward Deployed Engineer (FDE)
Architecture & Development Assistance: Custom
Team Trainings & Office Hours
Platform
Deployment: Free & Growth multi-tenant; Enterprise self-hosting, single tenant, or multi-tenant
SLAs for Uptime / Availability: Available on Enterprise
Custom SSO / SAML: Google Workspace only; Enterprise SAML/OIDC on Enterprise
Role-Based Access Control: Available
Automatic Cloud Data Replication: Available
Procurement
Access to SOC2 Type II Reports
Custom MSA & Data Privacy Agreement
Infosec / Security Review
Trial
Trial Period: Free n/a; Growth 14 days; Enterprise 30 days
Expert support from AI engineers who’ve been there
Get hands-on support, training, and custom guidance from our AI engineers.
AI teams ship faster with Freeplay
"Freeplay transformed what used to feel like black-box ‘vibe-prompting’ into a disciplined, testable workflow for our AI team. Today we ship and iterate on AI features with real confidence about how any change will impact hundreds of thousands of customers."
Ian Chan
VP of Engineering at Postscript
"At Maze, we've learned great customer experiences come through intentional testing & iteration. Freeplay is building the tools companies like ours need to nail the details with AI."
Jonathan Widawski
CEO & Co-founder at Maze
"The time we’re saving right now from using Freeplay is invaluable. It’s the first time in a long time we’ve released an LLM feature a month ahead of time."
Luis Morales
VP of Engineering at Help Scout
"As soon as we integrated Freeplay, our pace of iteration and the efficiency of prompt improvements jumped—easily a 10× change. Now everyone on the team participates, and the out-of-the-box product-market fit for updating prompts, editing them, and switching models has been phenomenal."
Michael Ducker
CEO & Co-founder at Blaide
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should."
Justin Reidy
Founder & CEO at Kestrel
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should. Copy"
Justin Reidy
Founder & CEO at Kestrel
AI teams ship faster with Freeplay
"Freeplay transformed what used to feel like black-box ‘vibe-prompting’ into a disciplined, testable workflow for our AI team. Today we ship and iterate on AI features with real confidence about how any change will impact hundreds of thousands of customers."

Ian Chan
VP of Engineering at Postscript
"At Maze, we've learned great customer experiences come through intentional testing & iteration. Freeplay is building the tools companies like ours need to nail the details with AI."

Jonathan Widawski
CEO & Co-founder at Maze
"The time we’re saving right now from using Freeplay is invaluable. It’s the first time in a long time we’ve released an LLM feature a month ahead of time."

Luis Morales
VP of Engineering at Help Scout
"As soon as we integrated Freeplay, our pace of iteration and the efficiency of prompt improvements jumped—easily a 10× change. Now everyone on the team participates, and the out-of-the-box product-market fit for updating prompts, editing them, and switching models has been phenomenal."

Michael Ducker
CEO & Co-founder at Blaide
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should."

Justin Reidy
Founder & CEO at Kestrel
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should. Copy"

Justin Reidy
Founder & CEO at Kestrel
AI teams ship faster with Freeplay
"Freeplay transformed what used to feel like black-box ‘vibe-prompting’ into a disciplined, testable workflow for our AI team. Today we ship and iterate on AI features with real confidence about how any change will impact hundreds of thousands of customers."
Ian Chan
VP of Engineering at Postscript
"At Maze, we've learned great customer experiences come through intentional testing & iteration. Freeplay is building the tools companies like ours need to nail the details with AI."
Jonathan Widawski
CEO & Co-founder at Maze
"The time we’re saving right now from using Freeplay is invaluable. It’s the first time in a long time we’ve released an LLM feature a month ahead of time."
Luis Morales
VP of Engineering at Help Scout
"As soon as we integrated Freeplay, our pace of iteration and the efficiency of prompt improvements jumped—easily a 10× change. Now everyone on the team participates, and the out-of-the-box product-market fit for updating prompts, editing them, and switching models has been phenomenal."
Michael Ducker
CEO & Co-founder at Blaide
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should."
Justin Reidy
Founder & CEO at Kestrel
"Even for an experienced SWE, the world of evals & LLM observability can feel foreign. Freeplay made it easy to bridge the gap. Thorough docs, accessible SDKs & incredible support engineers made it easy to onboard & deploy – and ensure our complex prompts work the way they should. Copy"
Justin Reidy
Founder & CEO at Kestrel
Experiment, evaluate and observe in one platform
No more patchwork solutions. Freeplay lets your team run AI experiments, evaluate model performance, and monitor production in one place—without switching between tools.

