
    Migrating from OpenRouter

OpenRouter gave you multi-provider access behind one key. Behest gives you that plus per-end-user identity, session memory, BYOK at wholesale rates, and browser-safe tokens. The migration is mostly a baseURL change, plus one change to how you authenticate (covered in Step 2).


    Step 1: add your provider keys

    OpenRouter pays providers for you from their wallet. Behest supports two modes:

    1. BYOK — paste your OpenAI / Anthropic / Google keys in dashboard → Providers. You pay providers directly at their list prices, no markup.
    2. Behest-hosted — use Behest's pool at wholesale (currently OpenAI + Anthropic), billed on your Behest plan.

    You can mix: BYOK OpenAI, Behest-hosted Anthropic, etc. Behest routes per model.
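The per-model routing above can be pictured as a lookup table. This is an illustrative sketch only — the mode names (`"byok"` / `"hosted"`) and table shape are assumptions, not Behest's actual config format; real routing is configured in the dashboard.

```typescript
// Hypothetical per-model routing table: each model resolves to either your
// own provider key (BYOK) or Behest's hosted pool. Names are illustrative.
type ProviderMode = "byok" | "hosted";

const routing: Record<string, ProviderMode> = {
  "gpt-4o": "byok",                     // your own OpenAI key, 0% markup
  "claude-3-5-sonnet-latest": "hosted", // Behest's Anthropic pool
};

function modeFor(model: string): ProviderMode {
  return routing[model] ?? "byok"; // assume BYOK as the default mode
}
```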


    Step 2: swap the client

    Heads-up — Behest is not a one-header swap. OpenRouter accepts your long-lived sk-or-v1-... as the Bearer on /api/v1/chat/completions. Behest does not: Kong requires a minted JWT (short-lived, per end-user). Your behest_sk_live_... only authenticates the mint endpoint. Replacing the OpenRouter key with BEHEST_KEY directly in the OpenAI SDK will 401. Either use the v1.5 Behest SDK (which mints for you), or mint manually and pass the JWT.

    Node / TypeScript

    Before (OpenRouter):

    ts
    const client = new OpenAI({
      apiKey: process.env.OPENROUTER_API_KEY,
      baseURL: "https://openrouter.ai/api/v1",
    });

    After — recommended: v1.5 Behest SDK

    ts
    import { Behest } from "@behest/client-ts";
    const behest = new Behest({
      key: process.env.BEHEST_KEY, // behest_sk_live_...
      baseUrl: `https://${BEHEST_SLUG}.behest.app`,
    });
    // SDK calls /v1/auth/mint per request, injects the JWT, streams back.
    await behest.chat.completions.create({ messages, user_id: userId });

    After — raw OpenAI SDK: mint the JWT on the server, hand it to the client as apiKey.

    ts
    import OpenAI from "openai";
     
    const mint = await fetch(`https://${BEHEST_SLUG}.behest.app/v1/auth/mint`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.BEHEST_KEY}`,
      },
      body: JSON.stringify({ user_id: userId, session_id: crypto.randomUUID() }),
    }).then((r) => r.json());
     
    const client = new OpenAI({
      apiKey: mint.access_token, // JWT — NOT BEHEST_KEY
      baseURL: `https://${BEHEST_SLUG}.behest.app/v1`,
      defaultHeaders: { "X-Session-Id": mint.session_id },
    });

    Python

    python
    # Before
    client = OpenAI(api_key=OPENROUTER_KEY, base_url="https://openrouter.ai/api/v1")
     
    # After — using the v1.5 Behest SDK (mints for you):
    from behest import Behest
    behest = Behest()  # reads BEHEST_KEY + BEHEST_BASE_URL from env
    behest.chat.completions.create(messages=..., model="gpt-4o-mini", user_id=user_id)
     
    # After — using raw OpenAI SDK: mint first, pass the JWT.
    import uuid, requests
    from openai import OpenAI
    mint = requests.post(
        f"https://{SLUG}.behest.app/v1/auth/mint",
        headers={"Authorization": f"Bearer {BEHEST_KEY}"},
        json={"user_id": user_id, "session_id": str(uuid.uuid4())},
    ).json()
    client = OpenAI(
        api_key=mint["access_token"],              # JWT — NOT BEHEST_KEY
        base_url=f"https://{SLUG}.behest.app/v1",
        default_headers={"X-Session-Id": mint["session_id"]},
    )

    Model name mapping

    OpenRouter prefixes provider names (openai/gpt-4o, anthropic/claude-3.5-sonnet). Behest accepts either:

    | OpenRouter | Behest |
    | --- | --- |
    | openai/gpt-4o | gpt-4o or openai/gpt-4o |
    | openai/gpt-4o-mini | gpt-4o-mini |
    | anthropic/claude-3.5-sonnet | claude-3-5-sonnet-latest or anthropic/claude-3.5-sonnet |
    | google/gemini-2.5-flash | gemini-2.5-flash |
    | meta-llama/llama-3.1-70b-instruct | via OpenRouter BYOK — keep the same model string |

    If you have OpenRouter as a BYOK provider in Behest, all openrouter/* models keep their original names — pass through unchanged.

    You can also set a project default model and drop the model field entirely; Behest fills it in.
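The mapping above can be sketched as a small helper. This is not part of the Behest SDK — the `renames` entries come from the table, and the assumption that only `openai/`, `anthropic/`, and `google/` prefixes are stripped (while OpenRouter-BYOK models pass through unchanged) follows the rules stated above.

```typescript
// Illustrative translation of OpenRouter model names to Behest names.
// Only the names that actually differ need a rename entry.
const renames: Record<string, string> = {
  "claude-3.5-sonnet": "claude-3-5-sonnet-latest",
};

// Provider prefixes Behest accepts bare (per the table above).
const strippable = new Set(["openai", "anthropic", "google"]);

function toBehestModel(model: string): string {
  const [prefix, ...rest] = model.split("/");
  if (rest.length > 0 && strippable.has(prefix)) {
    const bare = rest.join("/");
    return renames[bare] ?? bare;
  }
  // e.g. meta-llama/* via OpenRouter BYOK: pass through unchanged.
  return model;
}
```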


    Feature parity

    | Feature | OpenRouter | Behest |
    | --- | --- | --- |
    | Multi-provider | Yes | Yes (BYOK + hosted) |
    | Failover | Yes | Yes — configure fallback chain in dashboard |
    | Streaming (SSE) | Yes | Yes (OpenAI-compatible) |
    | Tool calls | Yes | Yes |
    | Structured outputs | Partial | Yes (native to each provider's support) |
    | Per-end-user tokens | No (single key) | Yes — this is the headline differentiator |
    | Session memory | No | Yes (X-Session-Id / X-Thread-Id) |
    | Persisted threads | No | Yes (GET /v1/threads/*) |
    | Per-user usage | No | Yes (/v1/billing/usage) |
    | Guardrails | No | Yes (PII, prompt-injection) |
    | Pricing | +5.5% markup on hosted | BYOK: 0% markup. Hosted: platform plan fee |

    Routing and fallback

    OpenRouter lets you specify a provider order per request:

    ts
    { "provider": { "order": ["OpenAI", "Azure"] } }

    Behest's equivalent is project-level fallback chains in the dashboard:

    • Primary: gpt-4o via your OpenAI BYOK
    • Fallback 1: claude-3-5-sonnet via Anthropic BYOK
    • Fallback 2: Behest-hosted gpt-4o

    Configure once; every request inherits the chain. Or pass model per request to pin a specific route.
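Conceptually, a fallback chain is just "try each route in order, return the first success." The real chain lives server-side in Behest's dashboard config; this sketch only illustrates the semantics, and the names in it are made up.

```typescript
// Conceptual model of a fallback chain: attempts run in order until one
// succeeds; the last error propagates if every route fails.
type Attempt = () => Promise<string>;

async function withFallback(chain: Attempt[]): Promise<string> {
  let lastError: unknown;
  for (const attempt of chain) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // this route failed — fall through to the next
    }
  }
  throw lastError;
}
```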


    Headers and extras

    OpenRouter-specific headers (HTTP-Referer, X-Title) are ignored by Behest — not needed for attribution. Replace with:

    • X-Session-Id for ephemeral memory (set automatically by the SDK after behest.auth.mint(), or pass session_id per chat.completions.create call)
    • X-Thread-Id for persistent conversations
    • A per-user Behest JWT (via behest.auth.mint({ user_id })) for rate limits
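The session/thread split above amounts to choosing which header to send. A minimal sketch, assuming you are building headers by hand for the raw-SDK path (the `memoryHeaders` helper is hypothetical, not a Behest API):

```typescript
// Hypothetical helper: pick ephemeral vs persistent memory via headers.
function memoryHeaders(opts: { sessionId?: string; threadId?: string }) {
  const headers: Record<string, string> = {};
  if (opts.sessionId) headers["X-Session-Id"] = opts.sessionId; // ephemeral
  if (opts.threadId) headers["X-Thread-Id"] = opts.threadId;    // persisted
  return headers;
}
```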

    Common upgrade reasons

    • "My OpenRouter key leaked in a browser bundle" → Behest JWTs expire in 15 min and are scoped to one user. Not a catastrophic leak.
    • "One abusive user exhausts my rate limit" → Behest per-user RPM prevents this by construction.
    • "I need conversation history but don't want to store messages myself" → Use Behest threads.
    • "I want to show users their usage" → GET /v1/billing/usage filtered by uid.
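Because Behest JWTs are short-lived, a client needs to know when to re-mint. One way — a sketch, not a Behest API — is to read the token's standard `exp` claim; the 30-second skew is an arbitrary safety margin:

```typescript
// Sketch: decode a JWT payload (standard three-part token, base64url JSON)
// and check the exp claim so the client can re-mint before expiry.
function isExpired(jwt: string, skewSeconds = 30): boolean {
  const payload = JSON.parse(
    Buffer.from(jwt.split(".")[1], "base64url").toString("utf8"),
  );
  return payload.exp <= Date.now() / 1000 + skewSeconds;
}
```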

    See also

    Enterprise Token FinOps: Enforce hard budgets and attribute costs per session.
