
Introduction

Introduction to the @built-in-ai packages and Vercel AI SDK v6

If you've been experimenting with running local language models directly in the browser using Transformers.js, WebLLM, or the new Prompt API in Chrome/Edge, you're likely familiar with the challenges:

  • Custom hooks and UI components: Each framework requires its own integration patterns
  • Fallback complexity: Building robust integration layers to automatically fall back to server-side models when client-side compatibility is an issue
  • API fragmentation: Significant differences in API specifications across different in-browser LLM frameworks

API Fragmentation

The core problem is that each in-browser LLM framework exposes a fundamentally different API, as the snippets after this list illustrate:

  • Transformers.js introduces its own pipeline API, supporting a range of NLP, Computer Vision, Audio, and Multimodal tasks by leveraging ONNX Runtime
  • WebLLM provides an OpenAI-style API and leverages its own MLCEngine, WebGPU, and WebAssembly for model execution
  • The Prompt API (backed by Gemini Nano in Chrome and Phi-4-mini in Edge) offers native browser integration via the JavaScript LanguageModel namespace
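To make the fragmentation concrete, here is a rough sketch of minimal text generation in each of the three frameworks, written as three separate snippets. The model names are placeholders and the exact call signatures may differ between versions, so treat this as an illustration rather than copy-paste code:

// Transformers.js: task-based pipeline API (model id is only an example)
import { pipeline } from "@huggingface/transformers";
const generator = await pipeline("text-generation", "onnx-community/Qwen2.5-0.5B-Instruct");
const output = await generator("Why is the sky blue?", { max_new_tokens: 128 });

// WebLLM: OpenAI-style chat completions on top of its MLCEngine (model id is only an example)
import { CreateMLCEngine } from "@mlc-ai/web-llm";
const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f32_1-MLC");
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});

// Prompt API: global LanguageModel namespace built into Chrome/Edge
const session = await LanguageModel.create();
const answer = await session.prompt("Why is the sky blue?");

Three frameworks, three different mental models: a task pipeline, an OpenAI-compatible client, and a browser-native session object.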

Besides these API differences, it's also hard to fall back gracefully to server-side models when the client device can't run local models due to hardware limitations (e.g., insufficient VRAM) or browser compatibility issues (e.g., WebGPU still not being implemented in some browsers).
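Rolling that detection by hand means a mix of capability checks before you ever load a model. The sketch below is only an illustration: it uses WebGPU support as a rough proxy for "can run WebLLM or GPU-accelerated Transformers.js models" and assumes the current Prompt API draft (LanguageModel.availability()), whose exact values and surface vary across browsers:

// Rough sketch of a hand-rolled capability check (not part of @built-in-ai)
async function canRunLocalModel() {
  // WebLLM and the WebGPU backend of Transformers.js require WebGPU
  if (!("gpu" in navigator)) return false;

  // Prompt API: the built-in model may be unavailable or need to be downloaded first
  if ("LanguageModel" in globalThis) {
    const availability = await LanguageModel.availability();
    return availability === "available" || availability === "downloadable";
  }

  return true;
}

// If this returns false, the request has to be routed to a server-side model instead.

Every app ends up re-implementing some version of this glue code and keeping it correct across browsers.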

Solution

The @built-in-ai packages solve this by providing unified Vercel AI SDK model providers for all of the above solutions. This architecture lets you build local, in-browser AI applications with one consistent developer experience, regardless of which engine actually runs the model.

import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";
// or: import { transformersJS } from "@built-in-ai/transformers-js";
// or: import { webLLM } from "@built-in-ai/web-llm";

const result = streamText({
  model: builtInAI(), // change model provider here
  prompt: "Why is the sky blue?",
});
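
From there, the result streams like any other AI SDK result; for example, you can iterate over textStream and append each chunk to your UI state:

for await (const chunk of result.textStream) {
  // e.g., append the chunk to your component state as it arrives
  console.log(chunk);
}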
