OpenAI Dev Day: Our big takeaways

Nov 8, 2023

I (Ian) got to attend OpenAI DevDay in person Monday, and would have posted this sooner if we hadn’t spent the last 48 hours scrambling to support new features and talking to tons of people who are all eager to digest the announcements. It's been wild.

If you haven’t watched their keynote, it’s worth your time — it feels like a once-in-a-decade tech keynote. The demo of the new Assistants API by Romain Huet at ~33 min will be talked about in developer circles for years to come. The OpenAI team is executing at an impressive speed & quality bar, and we saw a new horizon for where generative AI will take us all.

There were lots of interesting developer announcements, and we’ve already updated Freeplay to give our customers easy support for the new, cheaper versions of GPT-3.5 Turbo and GPT-4 Turbo with longer context windows. The announcement of custom agents — aka “GPTs”, including an app store model — will likely get even more attention.
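For developers, the model updates map to new identifiers in the API. Here is a minimal sketch of assembling a request for them, assuming the v1 Python SDK and the model names announced at Dev Day (check the current docs, since identifiers change over time):

```python
# Sketch: request payloads for the models announced at Dev Day.
# Model identifiers are the Dev Day names; verify against current docs.
GPT4_TURBO = "gpt-4-1106-preview"    # 128K-token context window, cheaper
GPT35_TURBO = "gpt-3.5-turbo-1106"   # updated, cheaper GPT-3.5 Turbo

def build_request(model: str, user_message: str) -> dict:
    """Assemble a chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# With the OpenAI Python SDK (v1+) and OPENAI_API_KEY set, sending it is:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_request(GPT4_TURBO, "Hi"))
```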

Since the event, we’ve talked to ~a dozen leaders in different types of companies/different parts of the AI ecosystem about what OpenAI’s updates mean for them, and it felt worthwhile to reflect on a couple big ideas that have been top of mind. There are two big themes below:

  • Using AI: What do the announcements mean for ChatGPT & the wider ecosystem?

  • Building with AI: What do the announcements mean for software companies building with OpenAI APIs?

Using AI: A more powerful surface for consumers & business users

What do the announcements mean for ChatGPT & the wider ecosystem?

First, we got a glimpse of OpenAI’s vision for consumer AI, as well as the future of ChatGPT Enterprise for individual business users. With the announcement of GPTs and the corresponding app store with rev-share model for creators, the ChatGPT product will at least partially compete with anyone building a chatbot or agent UI as a standalone service. 

But, the judo move to not alienate those folks: OpenAI will attempt to harness their ecosystem’s creativity to make the product better. If developers or creators want to distribute a niche chatbot or agent via ChatGPT, they’ll have access to today’s 100M+ weekly active users, with a rev share that might be similar to the iOS App Store. Extending the App Store analogy, this makes ChatGPT equivalent to the iPhone as a platform for productivity.

If they’re successful, ChatGPT could become the front door for anything you want to get done — similar to how Google has been the front door for anything you wanted to know. Plan a weekend trip, manage your calendar, analyze your team’s budget, summarize happenings inside your company… All of them are possible now, and more will be possible soon as developers/creators (no code needed!) create & release new GPTs to the world.

In the near future:

  • First party product: Basic ChatGPT will solve most general needs with their first-party tools like web browsing, DALL-E 3 for creating images, and Code Interpreter for creating and running code to answer questions or analyze data. No third parties will be needed for the basics.

  • Third party GPTs: If you want to ask ChatGPT something that it can’t address on its own, more niche needs can be addressed by custom or community-created GPTs. These will even work in business contexts, using an updated version of Plugins called “Actions” that include the option to configure authenticated access to third party APIs. Want a revenue reporting tool to tell you about your quarter, or a Jira analysis tool to give you a bottom-up understanding of what’s happening across an engineering organization? GPTs will be able to help. (More examples here from Zapier.)

Also: What exactly is a “GPT?”

Several people have asked what a GPT is exactly, and how it’s different from a prompt, or the new Assistants API.

If you haven’t been close to this space, a GPT is a cocktail of ingredients that builds on the idea of the ChatGPT “Plugins” introduced back in March of this year. Where Plugins provided a basic interface to other systems/APIs that ChatGPT could be enabled to use, the new custom GPTs are a full-featured UX that makes better use of external tools and knowledge.

The ingredients:

  • A set of instructions that can be saved & iterated on, and that can be used instead of the normal ChatGPT to address a task. 

  • Configurable access to default tools — web browsing, DALL-E 3, and Code Interpreter — so a new GPT can access the internet for up-to-date info, generate images, or process data.

  • External knowledge, provided simply by uploading files (no code required!) — for instance, a PDF with relevant source material.

  • “Actions” replace the concept of Plugins announced several months ago, and give ChatGPT a way to talk to third-party services via API. These can include authenticated APIs, for access to private data or services.

The core ChatGPT model then does the work to reason about which tools or external knowledge are needed to address each new ask, guided by the provided instructions.

And these can all be created by non-developers – here’s a screenshot of what it looks like to set one up.
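Behind the scenes, an Action is configured with an OpenAPI schema describing a third-party API that ChatGPT can call. Here is a hypothetical sketch of such a schema, expressed as a Python dict; every name and URL in it is invented for illustration:

```python
# Hypothetical OpenAPI schema for a GPT "Action" that queries a revenue API.
# All names and URLs here are invented for illustration.
revenue_action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Revenue Reporting", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/revenue/quarterly": {
            "get": {
                "operationId": "getQuarterlyRevenue",
                "summary": "Return revenue totals for a given quarter",
                "parameters": [{
                    "name": "quarter",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "string"},  # e.g. "2023-Q3"
                }],
            }
        }
    },
}
```

Authentication (API key or OAuth) is configured separately in the GPT builder UI, which is what makes Actions usable against private business systems.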

Building with AI: What it means for developers & software companies

What do the announcements mean for software companies building with OpenAI APIs?

While ChatGPT may become a common front door for lots of general activity, OpenAI seems equally committed to helping software companies & established businesses build with their models and tools. If you want to make your application better, the new API updates will help.

What particularly stood out for enterprise developers?

  • Costs low enough to challenge open source: Not only was there a ~2.75x average cost decrease for a faster, more powerful GPT-4 Turbo with the longest context window of any commercial model (128K tokens), but in breakout sessions after the keynote, the OpenAI team explicitly encouraged people to use GPT-4 to bootstrap training data and then use that data to fine-tune GPT-3.5 Turbo for even cheaper inference. For several of the people we’ve talked to in the past two days, it’s changed their calculus for considering open source models. When they think about total cost of ownership and performance tradeoffs, OpenAI suddenly seems like a more attractive choice, especially if latency isn't a concern and your request volumes are less than a couple million per month.
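The bootstrap-then-fine-tune flow they described amounts to: generate completions with GPT-4, write them out in the chat fine-tuning JSONL format, and submit a fine-tuning job. A rough sketch, assuming the documented chat fine-tuning format (the SDK call is shown as a comment; the prompt/label pair is a made-up example):

```python
import json

# Sketch: bootstrap GPT-3.5 fine-tuning data from GPT-4 outputs.
# The chat fine-tuning format is one JSON object per line, each with a
# "messages" list. GPT-4 would supply the assistant completions.
def to_training_example(prompt: str, gpt4_completion: str) -> str:
    record = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": gpt4_completion},
        ]
    }
    return json.dumps(record)

# One JSONL line per example; in practice you'd generate thousands.
examples = [to_training_example("Classify: 'great product'", "positive")]

# With the OpenAI SDK, after uploading the JSONL file:
#   client.fine_tuning.jobs.create(
#       training_file=file_id, model="gpt-3.5-turbo")
```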

  • Developer lock-in: With the addition of features like parallel function calling, JSON mode, “reproducible outputs” and more, OpenAI has differentiated their models even further from their competitors’. The features are attractive for developers, but they also have a secondary strategic lock-in effect – it’s harder to swap in another model like Anthropic’s or Llama 2 once developers start depending on these features. Other providers don’t offer them (yet).
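Those features show up as request parameters on chat completions. Here is a sketch of what they look like, assuming the parameter names from the announcement (the weather tool is a hypothetical example, not a real API):

```python
# Sketch: the new request options layered onto a chat-completion call.
# Parameter names follow the Dev Day announcements; verify in current docs.
new_features = {
    "response_format": {"type": "json_object"},  # JSON mode: valid JSON out
    "seed": 42,                                  # reproducible outputs (beta)
    "tools": [{                                  # function calling; the model
        "type": "function",                      # can now call several tools
        "function": {                            # in a single turn
            "name": "get_weather",               # hypothetical tool
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
# These merge into the payload passed to client.chat.completions.create(...).
```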

  • Solving common infrastructure problems: The new Assistants API takes on huge chunks of the infrastructure work that so many developers have been handling themselves. It's effectively the underlying API for building GPTs. How does it help? We’ve been surprised talking to agent developers who, rather than feeling threatened by the announcement, are actually encouraged, because OpenAI has made their lives easier in a few ways:

    • For developers building agents, calling tools is simplified – and OpenAI’s own tools, Code Interpreter and Knowledge Retrieval (RAG), can now be used outside ChatGPT for the first time.

    • For RAG systems (“retrieval augmented generation”), they’ve simplified the work required to create embeddings, figure out chunk sizes & retrieval ranking, etc. Now files can simply be uploaded for reference and accessed with the Knowledge Retrieval tool.

    • The Assistants API also solves conversation threading for developers and makes it easy to build stateful assistants with knowledge of earlier parts of a conversation.
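To make that concrete, here is a sketch of the basic Assistants API flow as announced (beta endpoints; the SDK calls are shown as comments, and the assistant itself is a hypothetical example):

```python
# Sketch of the Assistants API flow: create an assistant with built-in
# tools, open a thread, add a user message, then start a run.
assistant_config = {
    "name": "Data Analyst",                       # hypothetical assistant
    "instructions": "Answer questions using the uploaded files.",
    "model": "gpt-4-1106-preview",
    "tools": [
        {"type": "code_interpreter"},  # run code over user data
        {"type": "retrieval"},         # RAG over uploaded files
    ],
}

# With the OpenAI Python SDK (v1+), the calls look roughly like:
#   assistant = client.beta.assistants.create(**assistant_config)
#   thread = client.beta.threads.create()  # server-side conversation state
#   client.beta.threads.messages.create(
#       thread_id=thread.id, role="user", content="Summarize Q3 spend")
#   run = client.beta.threads.runs.create(
#       thread_id=thread.id, assistant_id=assistant.id)
```

The thread object is what removes the conversation-state bookkeeping: messages accumulate server-side, so developers no longer have to replay history into every request themselves.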

  • Indemnification for IP issues: A major announcement for larger businesses is that OpenAI will now provide indemnification related to IP challenges that arise from using its models, which they’re calling “Copyright Shield.” Until now, this has been at least one selling point for companies interested in Azure OpenAI – they could negotiate more enterprise-friendly terms with Microsoft. This is a key update on the legal side that will make using OpenAI directly an easier sell for larger businesses.


Those are initial highlights after 2 days of processing, and we’ll continue to dig into the updates with our team and others in the weeks to come.

We’re excited to bring support for more of these new features to Freeplay soon – stay tuned! If you’re curious to follow along, sign up for our newsletter.
