MCP server · Self-healing · Local-first

Give your AI agent a real browser.

Browser MCP and chrome-devtools-mcp work — until your agent clicks something with a randomized class name and the whole loop crashes. LumaBrowser is a desktop browser with an MCP server, self-healing selectors, and a token-cached page model. Plug it into Claude Desktop, Cursor, OpenClaw, or any MCP host in one config block.

Download LumaBrowser — free · Show me the config
Free with telemetry · no API key required · works with local models via LM Studio or WebGPU

Three lines of JSON, one restart

LumaBrowser ships an MCP server. Add it to your claude_desktop_config.json (or your OpenClaw / Cursor MCP config), restart the host, and your agent gets browser tools.

{
  "mcpServers": {
    "lumabrowser": {
      "command": "lumabrowser",
      "args": ["--mcp"]
    }
  }
}
1. Install LumaBrowser. Download from the pricing page or run npx lumabrowser start.

2. Edit your MCP config. On Windows: %APPDATA%\Claude\claude_desktop_config.json. On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json. On Linux: ~/.config/Claude/claude_desktop_config.json.

3. Restart your MCP host. Your agent now has navigate, click, fill_form, screenshot, get_source, and a dozen other tools — all backed by a real Chromium tab on your machine.

The selector breaks. Your agent doesn't.

Modern frontends ship CSS-in-JS class names like css-8xk2m9 that change on every build. Most MCP browser tools delegate to Puppeteer or Playwright and crash with element not found. LumaBrowser's selector tools take an optional llmFallback — if the CSS misses, an LLM resolves the description against the live DOM and the call succeeds.

Without self-heal

The deploy ships, your agent dies

The frontend pushes a new build at 4pm. Class names change. Your agent loop crashes on the first click() and you find out from an alert at 8pm.

click({ selector: ".product-buy-btn" })
  → Error: element not found
With LumaBrowser

The fallback resolves it, the agent moves on

Pass an llmFallback alongside your selector. When the CSS misses, an LLM looks at the live DOM, picks the right element by description, and the call returns.

click({
  selector: ".product-buy-btn",
  llmFallback: "the primary buy button under the price"
})  → ok
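The try-CSS-first, fall-back-to-LLM pattern can be sketched in a few lines. This is an illustration of the strategy, not LumaBrowser's actual implementation: a toy page snapshot stands in for the live DOM, and a crude keyword matcher stands in for the LLM call.

```typescript
// Sketch of the self-healing selector strategy. PageElement, queryDom,
// and llmResolve are invented stand-ins for illustration.

type PageElement = { selector: string; text: string };

// Toy page snapshot: class names changed after a redeploy.
const dom: PageElement[] = [
  { selector: ".css-8xk2m9", text: "Buy now" },
  { selector: ".css-1a2b3c", text: "Add to wishlist" },
];

// Stand-in for a CSS lookup against the live page.
function queryDom(selector: string): PageElement | undefined {
  return dom.find((el) => el.selector === selector);
}

// Crude stand-in for the LLM: match description words against element text.
// The real fallback reasons over the full DOM, not keywords.
function llmResolve(description: string): PageElement | undefined {
  const words = description.toLowerCase().split(/\s+/);
  return dom.find((el) =>
    words.some((w) => el.text.toLowerCase().includes(w))
  );
}

function click(opts: { selector: string; llmFallback?: string }): string {
  const hit = queryDom(opts.selector);
  if (hit) return `clicked ${hit.selector}`;          // CSS hit: cheap path
  if (opts.llmFallback) {
    const fb = llmResolve(opts.llmFallback);          // CSS missed: resolve by description
    if (fb) return `clicked ${fb.selector} (via fallback)`;
  }
  throw new Error(`element not found: ${opts.selector}`);
}
```

The key property: the expensive resolution step only runs when the cheap selector misses, so the fallback costs nothing on the happy path.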

Stop dumping raw HTML into your context

If your agent is reading entire DOM trees to figure out where to click, you're burning tens of thousands of tokens per turn. LumaBrowser's Template Builder runs once per domain, generates a generalized selector map, and caches it — so subsequent visits return a tiny structured map instead of the whole page.

~12,400 → ~380
tokens per visit, first request vs. cached

~97% reduction on cached visits

The first time your agent hits a domain, the Template Builder analyzes the page and writes a generalized selector map to local SQLite. Every subsequent visit on that domain pulls the cached map — same structural output, ~30× fewer tokens. Multiply by every page your agent loops over and the math gets very different.
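The caching flow reduces to a standard analyze-once, read-many pattern. In this sketch an in-memory Map stands in for LumaBrowser's local SQLite store, and the selector map contents are invented for illustration:

```typescript
// Sketch of the Template Builder caching flow. The Map stands in for
// local SQLite; analyzePage and the selectors are illustrative.

type SelectorMap = Record<string, string>;

const templateCache = new Map<string, SelectorMap>();

// Stand-in for the expensive step: the full page goes through analysis
// once, producing a generalized selector map for the whole domain.
function analyzePage(domain: string): SelectorMap {
  return {
    title: "h1.product-title",
    price: "[data-testid='price']",
    buyButton: "button[type='submit']",
  };
}

function getTemplate(domain: string): { map: SelectorMap; cached: boolean } {
  const hit = templateCache.get(domain);
  if (hit) return { map: hit, cached: true };  // cheap path: small structured map
  const map = analyzePage(domain);             // expensive path: full analysis
  templateCache.set(domain, map);
  return { map, cached: false };
}
```

Because the map is keyed by domain rather than by URL, every page the agent loops over on that domain hits the cheap path after the first visit.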

What your agent can do

A focused set of MCP tools, not a 60-tool kitchen sink. Every interaction tool that targets the DOM accepts an llmFallback description.

Navigate

create_tab · navigate · get_tabs · close_tab · wait_for

Open tabs, route them to URLs, wait on a selector or page state, close them when done. The agent gets a real tab list it can reason about, not a single sandboxed page.

Interact

click · fill_form · press_key · scroll · execute_js

Click, type, scroll, send keystrokes, run JS in page context. Selector-based tools fall back to an LLM description if the CSS misses, so a frontend redesign doesn't end the run.

Extract

get_source · get_element · screenshot · get_template

Grab cleaned page text, structured element properties, full PNG screenshots, or the cached generalized template. Four output formats, so the agent picks the cheapest one for the job.

Where LumaBrowser fits in the stack

There are a lot of ways to give an agent a browser right now. Here's how the local-desktop-app approach compares to the alternatives engineers usually try first.

|                        | Browser MCP / Claude in Chrome | chrome-devtools-mcp        | Browserbase             | Browserless                     | LumaBrowser                          |
|------------------------|--------------------------------|----------------------------|-------------------------|---------------------------------|--------------------------------------|
| Where it runs          | Chrome / Edge extension        | Local Chrome via Puppeteer | Vendor cloud only       | Vendor cloud or self-host       | Local desktop app                    |
| MCP host support       | Claude Desktop / Code only     | Any MCP host               | Custom SDK + MCP server | HTTP / WS API, MCP via wrappers | Any MCP host                         |
| Pricing model          | Free, gated to Anthropic       | Free, OSS                  | Free 1 hr, then $20+/mo | Free 1k units, then $25+/mo     | Free, no caps                        |
| Self-healing selectors | No                             | No (raw Puppeteer)         | Stagehand AI actions    | No                              | LLM fallback per call                |
| Token-cached page maps | No                             | No                         | No                      | No                              | Template Builder                     |
| Local model support    | No (Anthropic only)            | Bring your own host        | Cloud-only              | Cloud-only                      | LM Studio, WebGPU, OpenAI, Anthropic |
| Where the data lives   | Anthropic + your Chrome        | Your machine               | Their cloud             | Their cloud or yours            | Your machine                         |

Pricing as listed by each vendor at time of publication. Cloud headless-browser pricing is metered — Browserbase by browser-hours, Browserless by 30-second “Units” — so cost scales with how often your agent loops. Higher tiers (Browserbase Startup at $99/mo, Browserless Starter at $140/mo annually) raise the included quota and concurrency.

This comparison reflects publicly available pricing and feature information gathered to the best of our knowledge from each vendor's public materials. Vendors update plans frequently and we're a small team — if anything here looks wrong, please email [email protected] with the correction and a source, and we'll update the page.

Compatibility & BYOK

Bring your own MCP host. Bring your own model.

Works as an MCP server with

  • Claude Desktop — tested daily; the canonical setup target
  • Cursor — via ~/.cursor/mcp.json
  • OpenClaw and other open MCP hosts
  • Custom agents — speak MCP over stdio with the standard SDK
  • WebDriver / CDP clients — the same browser exposes Selenium and CDP endpoints, so Puppeteer and Playwright connect via connectOverCDP
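For Cursor specifically, the entry in ~/.cursor/mcp.json mirrors the Claude Desktop block shown earlier, assuming the same lumabrowser --mcp command:

```json
{
  "mcpServers": {
    "lumabrowser": {
      "command": "lumabrowser",
      "args": ["--mcp"]
    }
  }
}
```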

Bring your own model

  • LM Studio — point at http://localhost:1234/v1; fully local inference
  • Anthropic — paste an API key; full Claude model lineup supported
  • OpenAI — same; plus any OpenAI-compatible endpoint (Ollama, vLLM)
  • Local WebGPU — bundled extension runs models in the browser itself, no API keys
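The reason "OpenAI-compatible" covers so many backends is that they all accept the same chat-completions request shape, so only the base URL (and, for hosted services, the key) changes. A sketch, with illustrative endpoints and model names:

```typescript
// Sketch: one request builder, many OpenAI-compatible backends.
// Endpoints, models, and the placeholder key are illustrative.

type Provider = { baseUrl: string; model: string; apiKey?: string };

const providers: Record<string, Provider> = {
  lmStudio: { baseUrl: "http://localhost:1234/v1", model: "local-model" },
  ollama:   { baseUrl: "http://localhost:11434/v1", model: "llama3" },
  openai:   { baseUrl: "https://api.openai.com/v1", model: "gpt-4o-mini", apiKey: "sk-placeholder" },
};

function chatRequest(
  p: Provider,
  prompt: string
): { url: string; headers: Record<string, string>; body: unknown } {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (p.apiKey) headers["Authorization"] = `Bearer ${p.apiKey}`; // only hosted backends need a key
  return {
    url: `${p.baseUrl}/chat/completions`,
    headers,
    body: { model: p.model, messages: [{ role: "user", content: prompt }] },
  };
}
```

Swapping LM Studio for OpenAI (or vLLM, or Ollama) is a one-line change to the provider entry; the request itself never changes.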

Stop losing agent runs to a class name change

LumaBrowser is the local desktop browser your agent already wishes you had wired up. Free to download, plugs into your existing MCP host, runs with the model you're already paying for — or no model at all if you want to drive it from a script.

Download LumaBrowser — free · Read the MCP tool reference