How to Migrate from the OpenAI SDK to Vercel AI SDK

We shipped our migration from the OpenAI SDK to the Vercel AI SDK on a Thursday afternoon. By Friday morning, every LLM call in production was failing. The culprit was a Zod .url() schema that the AI SDK’s OpenAI provider silently rejected. We reverted the entire migration, spent a day fixing compatibility issues, and re-deployed the following week.

That revert taught me more about the migration than the migration itself did.

If you’re considering moving from direct OpenAI SDK calls to the Vercel AI SDK, this is the Vercel AI SDK migration guide I wish existed when I went through it. It covers every pattern you’ll need to transform, the Zod pitfalls that aren’t documented anywhere, and how to avoid our production incident. I’ve included before/after code for every major pattern, because that’s what I needed and couldn’t find.

Why Migrate Away from the OpenAI SDK?

Honestly: you probably don’t need to. If you’re only ever going to call OpenAI, the direct SDK works fine. The migration creates real work and introduces real risk, as we learned firsthand.

But there are two reasons that tipped the scale for us.

Provider Lock-in Compounds Over Time

Every direct openai.chat.completions.create() call is a coupling point. When we wanted to test Anthropic’s Claude on a subset of our analysis pipeline, we realized we’d need to rewrite every call site. The AI SDK’s provider pattern means swapping openai("gpt-4o") for anthropic("claude-sonnet-4-20250514") is a one-line change. Same interface, different provider.
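
Here's a sketch of that swap, using the AI SDK's generateText function (covered below) and assuming the @ai-sdk/anthropic provider package is installed:

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Identical call shape to the OpenAI version; only the model line changes
const { text } = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  prompt: userMessage,
});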

This is effectively a partial implementation of the facade pattern, which I'm a big believer in.

This matters less on day one and more on day 300, when you’ve got 40 LLM call sites and a new model vendor offers better price-performance for your use case.

The Assistants API Is Shutting Down

OpenAI announced the Assistants API deprecation with a shutdown date of August 26, 2026. If you’re using Assistants, you need a migration path regardless. OpenAI’s own replacement is the Responses API, but the AI SDK offers a provider-agnostic alternative. Migrating your other LLM calls at the same time reduces the total number of migrations you’ll do.

What the AI SDK Gives You

At its core, the Vercel AI SDK is a TypeScript-first abstraction over LLM providers. The three functions you’ll use most:

  • generateText for single completions
  • streamText for streaming responses
  • generateObject for structured output with Zod validation

Each function accepts a model parameter from any registered provider. That’s the entire value proposition: same interface, any provider, type-safe output.

Before You Start: Migration Prerequisites

Package Changes

Remove the OpenAI SDK and install the AI SDK packages:

# Remove direct OpenAI dependency
npm uninstall openai

# Install AI SDK core and OpenAI provider
npm install ai @ai-sdk/openai

If you’re using other providers, install those too: @ai-sdk/anthropic, @ai-sdk/google, and so on. The providers list covers 20+ options.

TypeScript and Zod Compatibility

This is where our migration broke, so pay attention.

The AI SDK relies heavily on Zod for schema validation, especially in tool definitions and structured output. If you’re already using Zod, check your version. The AI SDK works best with Zod 3.22+, but certain Zod schema methods cause issues with the OpenAI provider specifically. If you’re also dealing with module system changes, I covered the TypeScript side of that in my CJS to ESM migration guide.
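
A quick way to see which Zod version your project actually resolves:

npm ls zod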

I’ll cover the exact failure in detail later, but the short version: audit your Zod schemas before you start. If any of them use .url(), .email(), or .transform(), you’ll need to adjust them.

Converting Chat Completions to generateText

This is the most common pattern you’ll transform. Here’s the direct comparison.

Before: OpenAI SDK

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: userMessage },
  ],
  temperature: 0.7,
});

const text = response.choices[0].message.content;

After: AI SDK

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: userMessage,
  temperature: 0.7,
});

The differences worth noting:

The system message is a top-level parameter, not part of the messages array. For multi-turn conversations, you still use a messages array, but the system prompt is always separate.
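
A sketch of the multi-turn shape, reusing the same imports as above:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Multi-turn: conversation history goes in messages, system stays separate
const { text } = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "What's the capital of Denmark?" },
    { role: "assistant", content: "Copenhagen." },
    { role: "user", content: "And roughly how many people live there?" },
  ],
});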

The response is destructured directly. No choices[0].message.content chain. The AI SDK returns { text, usage, finishReason } at the top level.

The API key comes from the OPENAI_API_KEY environment variable by default. No explicit configuration needed unless you’re using a custom base URL or multiple API keys.
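
If you do need explicit configuration, the provider package exposes a factory for it. A minimal sketch; the env var name and proxy URL here are hypothetical:

import { createOpenAI } from "@ai-sdk/openai";

// Hypothetical custom configuration: second API key plus a proxy base URL
const customOpenAI = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY_SECONDARY,
  baseURL: "https://llm-proxy.example.com/v1",
});

// Use it like the default provider: customOpenAI("gpt-4o")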

Switching Streaming Responses to streamText

Streaming is where the AI SDK starts to feel like a genuine improvement over the raw OpenAI SDK! The old pattern required manual iteration over chunks. The new pattern handles it more cleanly.

Before: OpenAI SDK

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userMessage }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}

After: AI SDK

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o"),
  prompt: userMessage,
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

Cleaner, but the real value shows up in web applications. The AI SDK provides result.toDataStreamResponse() that integrates directly with Next.js API routes or any standard Response-based framework. No manual Server-Sent Events formatting. This helped us a ton in our migration, since we were using OpenAI in multiple places across the Next.js app and in server packages alike.
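
For example, a sketch of a Next.js App Router handler (the route path is illustrative):

// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    prompt,
  });

  // Streams the response without manual SSE formatting
  return result.toDataStreamResponse();
}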

Watch Out for Runtime Differences

If you’re deploying to Vercel or any edge-based platform, test streaming in production early. There’s a well-documented gap between “streaming works locally” and “streaming gets cut off on the edge runtime.” This isn’t an AI SDK bug per se, but the migration is a good time to catch it.

For Node.js runtimes, streaming works as expected. I’d recommend starting your migration in a Node.js environment and dealing with edge runtime issues separately.

This took us way too long to realize, and it was causing weird behaviors nobody could explain.
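
If you're on the Next.js App Router, one way to keep a route on Node.js while you sort out edge issues is the runtime segment config:

// In the route file: opt this route out of the edge runtime
export const runtime = "nodejs";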

Replace OpenAI Function Calling with Vercel AI SDK Tools

This was the second most time-consuming part of our migration, after the Zod issues. The AI SDK replaces OpenAI function calling with Zod-schema-based tools that include built-in execution.

Previously we had to hand-roll schema validation and fail early whenever the model's response didn't match the expected shape. The AI SDK handles that out of the box.

Before: OpenAI SDK

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Copenhagen?" }],
  functions: [
    {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  ],
});

const functionCall = response.choices[0].message.function_call;
if (functionCall) {
  const args = JSON.parse(functionCall.arguments);
  // Execute the function...
}

After: AI SDK

import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "What's the weather in Copenhagen?",
  tools: {
    get_weather: tool({
      description: "Get the current weather for a location",
      parameters: z.object({
        location: z.string().describe("City name"),
        unit: z.enum(["celsius", "fahrenheit"]).optional(),
      }),
      execute: async ({ location, unit }) => {
        // Execute the function directly here
        return await fetchWeather(location, unit);
      },
    }),
  },
});

The improvement is substantial. The JSON Schema definition is replaced with a Zod schema. The argument parsing is handled automatically, with type safety. The execute function runs inline, so there’s no manual dispatch logic.

But here’s the trade-off: your tool definitions are now coupled to Zod. Every schema must be Zod-compatible with the AI SDK’s serialization layer. And that’s exactly where our production migration failed.

The Zod .url() Trap

This section exists because I spent hours debugging an issue that no blog post, tutorial, or official guide mentions. If you search for this, you’ll mostly find GitHub issues.

What Happened

Our codebase used Zod schemas to validate LLM outputs for a political analysis engine. Several of these schemas included .url() validators for source links and reference URLs:

const sourceSchema = z.object({
  title: z.string(),
  url: z.string().url(),       // This is the problem
  summary: z.string(),
});

Locally, everything worked. In production, every LLM call that used these schemas failed with opaque validation errors. The AI SDK's OpenAI provider converts Zod schemas to JSON Schema for the API request, and the .url() refinement doesn't serialize the way you'd expect through the provider layer.

The Fix

Drop the .url() refinement and keep the plain .string():

const sourceSchema = z.object({
  title: z.string(),
  url: z.string(),     // Works with the OpenAI provider
  summary: z.string(),
});

If you need URL validation, do it after the LLM response, not in the schema definition:

const result = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({ sources: z.array(sourceSchema) }),
  prompt: "Find sources for this topic...",
});

// Validate URLs after generation, not during
const validSources = result.object.sources.filter(
  (s) => z.string().url().safeParse(s.url).success
);

This applies to .email() and .transform() as well. Any Zod method that adds refinements beyond the base JSON Schema types can cause issues when the AI SDK serializes your schema for the provider.
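
The same post-generation pattern works for .transform(). A sketch with a hypothetical articleSchema:

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// was: publishedAt: z.string().transform((s) => new Date(s))
const articleSchema = z.object({
  title: z.string(),
  publishedAt: z.string(), // keep the schema a plain string for the provider
});

const { object } = await generateObject({
  model: openai("gpt-4o"),
  schema: articleSchema,
  prompt: "Extract the title and publication date...",
});

// Apply the transformation after generation instead
const publishedAt = new Date(object.publishedAt);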

Audit Your Schemas Before Migrating

Run a quick grep across your codebase:

grep -rE "\.url\(\)|\.email\(\)|\.transform\(" --include="*.ts" src/

Every match is a potential production failure. Fix them before you deploy.

Migrating Structured Output with Vercel AI SDK and Zod

If you’re using OpenAI’s response_format for structured output, the AI SDK’s generateObject with Zod schemas is a direct upgrade.

Before: OpenAI SDK

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "analysis",
      schema: {
        type: "object",
        properties: {
          sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
          confidence: { type: "number" },
          reasoning: { type: "string" },
        },
        required: ["sentiment", "confidence", "reasoning"],
      },
    },
  },
});

const analysis = JSON.parse(response.choices[0].message.content);

After: AI SDK

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { object: analysis } = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number(),
    reasoning: z.string(),
  }),
  prompt,
});

No JSON.parse. No manual JSON Schema definition. The Zod schema is the single source of truth for both validation and the API request. analysis is fully typed based on your schema.

This is the pattern where the AI SDK shines most. If you’re doing any kind of structured extraction from LLM outputs, the improvement in developer experience is significant.

The fact that I was able to remove several hundred lines of manual error handling, JSON parsing, and schema validation was a big boon for us. It was a noticeable quality-of-life improvement: less code, more simplicity, and a better result.

The Migration That Broke Production

I want to be direct about what happened, because I think migration guides that hide the hard parts aren’t worth much 😄

We migrated all LLM calls across our entire codebase in a single PR (except for a few select areas where we needed certain APIs that the AI SDK didn’t cover yet). The reasoning was straightforward: the AI SDK functions are direct replacements, the test suite passed, and running two SDKs in parallel felt like unnecessary complexity.

That reasoning was wrong.

The migration passed all tests locally and in CI. We deployed on a Thursday. By Friday morning, our political analysis engine, which runs nightly batch analysis on legislation, had failed completely. Every analysis job that involved Zod schemas with .url() validators was throwing errors.

The issue didn’t surface in tests because our test fixtures used hardcoded responses. The Zod serialization only fails when the schema is sent to the actual OpenAI API, meaning local tests with mocked responses pass just fine.

We reverted the entire PR on Friday. Over the weekend, I audited every Zod schema in the codebase, converted all .url() calls to .string(), and added a post-generation validation layer. We re-deployed the following Monday without issues.

What I’d Do Differently

  1. Migrate one call site at a time, not the entire codebase at once (obvious in hindsight, but when a migration looks easy on the surface it's tempting to do it all in one go)
  2. Run integration tests against the real API for at least one representative call per schema pattern (a sketch follows this list)
  3. Audit Zod schemas before starting, not after production breaks. I wish I had known about these incompatibilities in advance.
  4. Keep both SDKs installed during transition, removing the old one only after the new one is proven in production
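
For point 2, here's a sketch of what such a test could look like, assuming Vitest and a real OPENAI_API_KEY in the test environment:

import { describe, expect, it } from "vitest";
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// One representative schema per pattern used in production
const sourceSchema = z.object({
  title: z.string(),
  url: z.string(),
  summary: z.string(),
});

describe("schema serialization against the live API", () => {
  it("accepts the source schema", async () => {
    const { object } = await generateObject({
      model: openai("gpt-4o"),
      schema: sourceSchema,
      prompt: "Return one example source about TypeScript.",
    });
    expect(typeof object.title).toBe("string");
  });
});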

This is the same lesson from upgrading Prisma to the Rust-free client: incremental migration beats big-bang rewrites. And if you've been through the CI stability debugging I wrote about, you'll recognize the pattern: the constraint you don't test for is the one that breaks.

Running Both SDKs During Transition

Based on our experience, I’d recommend keeping both openai and ai packages installed during the migration. This feels redundant, but it lets you migrate incrementally.

The approach is simple. Pick one endpoint or service that makes LLM calls. Migrate it. Deploy. Verify it works in production for a few days. Then migrate the next one.

// During migration: both imports coexist
import { generateText } from "ai";          // New calls
import { openai } from "@ai-sdk/openai";    // New provider
import OpenAI from "openai";                 // Legacy, for unmigrated calls

const legacyClient = new OpenAI();           // Still used by unmigrated code

Once every call site is migrated and stable in production, remove the openai package entirely. For our codebase, the full transition took about two weeks, though most of that was waiting to confirm production stability between batches.

Adding Observability Post-Migration

One thing the AI SDK doesn’t give you out of the box is observability into your LLM calls. After our migration, we added Braintrust tracing to every generateText and streamText call.

This was partly motivated by the production incident. When things broke, we had no visibility into what the AI SDK was sending to OpenAI versus what we expected. Additionally, as we build out more features, we need an overview of which parts of the application cost us the most, and how much. Tracing gives you request/response logs, latency breakdowns, and token usage per call, plus cost estimates.

The AI SDK's experimental telemetry support makes this straightforward:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: userMessage,
  experimental_telemetry: {
    isEnabled: true,
    functionId: "political-analysis",
  },
});

If you’re running LLM calls in production, especially batch jobs or anything customer-facing, add tracing before or during the migration. You’ll want it when something goes wrong.

Migration Checklist

Before you start:

  • Audit Zod schemas for .url(), .email(), .transform() usage
  • Install ai and @ai-sdk/openai alongside existing openai package
  • Verify your Zod version is 3.22+
  • Set up LLM call tracing or logging

For each call site:

  • Replace openai.chat.completions.create() with generateText or streamText
  • Replace functions parameter with Zod-based tools
  • Replace response_format with generateObject
  • Test against the real API, not just mocks
  • Deploy and verify in production before moving to the next call site

After completing all migrations:

  • Remove the openai package
  • Remove any legacy client initialization
  • Configure error handling and maxRetries options (the AI SDK has built-in retry logic)
  • Verify token usage reporting still works (the AI SDK reports usage differently; see the sketch after this list)
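
For those last two items, a sketch of what the retry and usage handling could look like (maxRetries defaults to 2, and APICallError is exported from the ai package):

import { generateText, APICallError } from "ai";
import { openai } from "@ai-sdk/openai";

try {
  const { text, usage } = await generateText({
    model: openai("gpt-4o"),
    prompt: userMessage,
    maxRetries: 1, // lower the built-in retries if you already retry upstream
  });
  // Usage now lives on the result object, not in response metadata
  console.log(usage.totalTokens);
} catch (error) {
  if (APICallError.isInstance(error)) {
    console.error("Provider call failed:", error.statusCode, error.message);
  }
  throw error;
}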

The Trade-off

The Vercel AI SDK introduces a layer of abstraction between your code and the LLM provider. That abstraction costs you: Zod compatibility constraints, a new API surface to learn, and a dependency on Vercel's SDK roadmap. And there are a few APIs that aren't supported through the AI SDK, such as OpenAI's background mode, though this may have changed since we migrated.

What it gives back: provider flexibility, cleaner TypeScript types, structured output without manual JSON parsing, and a streaming interface that doesn’t require manual chunk assembly.

For a codebase with more than a handful of LLM calls, especially one that might need to switch providers or support multiple models, the trade-off is worth it. For a single script calling GPT-4, the direct SDK is simpler and that’s fine.

If you’re ready to migrate from the OpenAI SDK to the Vercel AI SDK, start with one endpoint. Migrate it, deploy it, watch it for a week. If the experience feels like an improvement, continue. If it feels like unnecessary indirection, the direct SDK isn’t going anywhere.

Start with the pattern that matters most to your codebase and work outward from there. And check your Zod schemas first. Trust me on that one.