Vibe coding tools let users describe an app in plain language and get back running code instantly. Your app handles the LLM interaction and UI, then uses E2B sandboxes to prepare and serve the generated app. Since the generated code never runs on your infrastructure, it can’t cause damage even if it’s buggy or malicious. For a complete working implementation, see Fragments — an open-source vibe coding platform you can try via the live demo.

Why E2B

  • Secure execution — AI-generated code runs in isolated sandboxes, not on your servers
  • Live preview URLs — each sandbox exposes a public URL you can embed in an iframe
  • Custom templates — pre-install frameworks like Next.js, Streamlit, or Gradio so sandboxes start instantly
  • Multi-framework support — same API whether the generated app is React, Vue, Python, or anything else

Install the SDK

Fragments uses the E2B Code Interpreter SDK.
npm i @e2b/code-interpreter
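The SDK reads your API key from the E2B_API_KEY environment variable, so set it before creating sandboxes.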

Core Implementation

Your app orchestrates the flow from its own server — the sandbox is used purely to prepare and serve the generated code.

Create a sandbox from a template

Each sandbox starts from a template with the target framework pre-installed and a dev server already running. See the Next.js template example.
import { Sandbox } from '@e2b/code-interpreter'

const sandbox = await Sandbox.create('nextjs-app', {
  timeoutMs: 300_000,
})
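
The timeoutMs option is the sandbox's lifetime: once it elapses, the sandbox shuts down automatically. If the user is still iterating, you can push the deadline back rather than recreating the sandbox. A minimal sketch (the five-minute figure is just an example):

// Give the sandbox another five minutes of life while the user is active
await sandbox.setTimeout(300_000)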

Install dependencies and write code

Install any extra packages the LLM requested, then write the generated code to the sandbox filesystem.
// Install additional packages requested by the LLM
await sandbox.commands.run('npm install recharts @radix-ui/react-icons')

// Write the generated code
await sandbox.files.write('/home/user/pages/index.tsx', generatedCode)
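
Installs can fail, for example when the LLM invents a package name. In the JS SDK, commands.run throws when the command exits with a non-zero code, so a try/catch lets you report the failure instead of serving a broken preview. A sketch, assuming you want to tear the sandbox down on failure:

try {
  await sandbox.commands.run('npm install recharts @radix-ui/react-icons')
} catch (err) {
  // Surface the install error to the user rather than showing a broken app
  console.error('Dependency install failed:', err)
  await sandbox.kill()
  throw err
}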

Get the preview URL

The dev server picks up changes automatically. Retrieve the sandbox’s public URL and embed it in your frontend.
const host = sandbox.getHost(3000)
const previewUrl = `https://${host}`
// Embed previewUrl in an iframe for the user
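
On the frontend, the preview is just an iframe. A minimal React sketch — PreviewFrame and its prop are illustrative names, not part of the SDK, and the iframe's sandbox attribute adds browser-level restrictions on top of E2B's isolation:

// Hypothetical component: renders the sandbox preview inside your app's UI
export function PreviewFrame({ previewUrl }: { previewUrl: string }) {
  return (
    <iframe
      src={previewUrl}
      title="Generated app preview"
      className="h-full w-full border-0"
      sandbox="allow-scripts allow-same-origin allow-forms"
    />
  )
}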

Full example

A complete flow: LLM generates code, sandbox prepares and serves it. Simplified from Fragments.
import { Sandbox } from '@e2b/code-interpreter'
import OpenAI from 'openai'

// 1. Get code from the LLM
const openai = new OpenAI()
const response = await openai.chat.completions.create({
  model: 'gpt-5.2-mini',
  messages: [
    {
      role: 'system',
      content:
        'Generate a single Next.js page component using TypeScript and Tailwind CSS. Return only code, no markdown.',
    },
    { role: 'user', content: 'Build a calculator app' },
  ],
})
const generatedCode = response.choices[0].message.content ?? ''

// 2. Create a sandbox and prepare the app
const sandbox = await Sandbox.create('nextjs-app', { timeoutMs: 300_000 })
await sandbox.files.write('/home/user/pages/index.tsx', generatedCode)

// 3. Return the preview URL
const previewUrl = `https://${sandbox.getHost(3000)}`
console.log('App is live at:', previewUrl)

// Later, when the user is done:
await sandbox.kill()
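
Vibe coding is iterative: the user asks for a tweak and expects the same preview to update. One approach is to persist sandbox.sandboxId alongside the user's session, reconnect on the next request, and overwrite the file; the dev server hot-reloads, so the embedded iframe updates in place. A sketch (revisedCode stands in for the LLM's follow-up output):

// Saved when the sandbox was first created
const sandboxId = sandbox.sandboxId

// On a follow-up request, reattach instead of creating a new sandbox
const existing = await Sandbox.connect(sandboxId)
await existing.files.write('/home/user/pages/index.tsx', revisedCode)
// Same host and port, so the preview URL the user already has stays valid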