
Usage

Features and usage examples for @built-in-ai/core with AI SDK v6

Basic Text Generation

Streaming Text

import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const result = streamText({
  model: builtInAI(),
  prompt: 'Invent a new holiday and describe its traditions.',
});

for await (const textPart of result.textStream) {
  console.log(textPart);
}

Non-streaming Text

import { generateText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const result = await generateText({
  model: builtInAI(),
  prompt: 'Invent a new holiday and describe its traditions.',
});

console.log(result.text);

Text Embeddings

Generate text embeddings using browser-native embedding capabilities:

import { embed, embedMany } from "ai";
import { builtInAI } from "@built-in-ai/core";

// Single embedding
const { embedding, usage } = await embed({
  model: builtInAI.embedding("embedding"),
  value: "Hello, world!",
});

// Multiple embeddings
const { embeddings, usage: batchUsage } = await embedMany({
  model: builtInAI.embedding("embedding"),
  values: ["Hello", "World", "AI"],
});
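
The resulting vectors can be compared directly, for example with the AI SDK's cosineSimilarity helper (a brief sketch building on the embeddings above):

import { cosineSimilarity, embedMany } from "ai";
import { builtInAI } from "@built-in-ai/core";

const { embeddings } = await embedMany({
  model: builtInAI.embedding("embedding"),
  values: ["Hello", "World"],
});

// Higher values mean the two texts are semantically closer
console.log(cosineSimilarity(embeddings[0], embeddings[1]));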

Download Progress Tracking

When using built-in AI models for the first time, the model needs to be downloaded. Track progress to improve UX:

import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const model = builtInAI();
const availability = await model.availability();

if (availability === "unavailable") {
  console.log("Browser doesn't support built-in AI");
  return;
}

if (availability === "downloadable") {
  await model.createSessionWithProgress((progress) => {
    console.log(`Download progress: ${Math.round(progress * 100)}%`);
  });
}

// Model is ready
const result = streamText({
  model,
  prompt: 'Invent a new holiday and describe its traditions.',
});

Multimodal Support

The Prompt API supports image and audio inputs (currently Chrome only):

import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const result = streamText({
  model: builtInAI(),
  messages: [
    { // Image
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        { type: "file", mediaType: "image/png", data: base64ImageData },
      ],
    },
    { // Audio
      role: "user",
      content: [{ type: "file", mediaType: "audio/mp3", data: audioData }],
    },
  ],
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
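
In the example above, base64ImageData and audioData are placeholders: the AI SDK accepts file data as a base64 string, a Uint8Array, or a URL. In the browser, a base64 string can be produced from a file input, for example (an illustrative sketch; the #image-upload selector is hypothetical):

// Convert a File from an <input type="file"> into a base64 string
async function fileToBase64(file: File): Promise<string> {
  const buffer = await file.arrayBuffer();
  let binary = "";
  for (const byte of new Uint8Array(buffer)) {
    binary += String.fromCharCode(byte);
  }
  return btoa(binary);
}

const input = document.querySelector<HTMLInputElement>("#image-upload");
const base64ImageData = await fileToBase64(input!.files![0]);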

Tool Calling

The builtInAI model supports tool calling with multi-step execution:

import { streamText, stepCountIs, tool } from "ai";
import { builtInAI } from "@built-in-ai/core";
import { z } from "zod";

const result = streamText({
  model: builtInAI(),
  messages: [{ role: "user", content: "What's the weather in San Francisco?" }],
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5), // multiple steps
});

It also supports tool execution approval (needsApproval).
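
A minimal sketch of an approval-gated tool (the deleteFile tool is illustrative only; see the AI SDK tool-approval documentation for handling the approval response):

import { streamText, tool } from "ai";
import { builtInAI } from "@built-in-ai/core";
import { z } from "zod";

const result = streamText({
  model: builtInAI(),
  messages: [{ role: "user", content: "Remove the draft file." }],
  tools: {
    deleteFile: tool({
      description: "Delete a file by path",
      inputSchema: z.object({ path: z.string() }),
      // Require user approval before this tool is executed
      needsApproval: true,
      execute: async ({ path }) => ({ deleted: path }),
    }),
  },
});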

Tool Calling with Structured Output

import { Output, ToolLoopAgent, tool } from "ai";
import { builtInAI } from "@built-in-ai/core";
import { z } from "zod";

const agent = new ToolLoopAgent({
  model: builtInAI(),
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        // ...
      },
    }),
  },
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      temperature: z.number(),
      recommendation: z.string(),
    }),
  }),
});

const { output } = await agent.generate({
  prompt: "What is the weather in San Francisco and what should I wear?",
});

Structured Output

Generate structured JSON output with schema validation:

Using generateText

import { generateText, Output } from "ai";
import { builtInAI } from "@built-in-ai/core";
import { z } from "zod";

const { output } = await generateText({
  model: builtInAI(),
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: "Generate a lasagna recipe.",
});

Using streamText

import { streamText, Output } from "ai";
import { builtInAI } from "@built-in-ai/core";
import { z } from "zod";

const { partialOutputStream } = streamText({
  model: builtInAI(),
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
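
Iterate partialOutputStream to receive the object as it is progressively filled in (a short usage sketch):

for await (const partialRecipe of partialOutputStream) {
  // Each chunk is a partially populated recipe object
  console.log(partialRecipe);
}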
