RESOLVEkit

Embedded LLM Support Agent That Can Actually Act

Developers integrate the ResolveKit SDK into mobile or web apps, and users get embedded chat. The LLM agent understands the app (including screen layout and flows learned from docs, guide images, and submitted screenshots), explains fixes clearly, and can invoke approved on-device functions to resolve issues.

Free (for now). Pay us in feedback.

Live Command Trace

Embedded SDK Support Session

Context loading

user

I opened the assistant chat and the timeline is still loading forever.

agent

I checked app context: iOS 17.4, timeline index drift, and recent sync lag in activity history.


Product-aware by default

Agents are grounded in app behavior, docs, troubleshooting paths, and visual references so they understand UI layout from guide images and screenshots.

SDK-embedded assistance

Developers ship chat inside mobile/web surfaces with one SDK and one backend control plane.

Tool execution with consent

Agent proposes on-device actions, asks for approval, and records the full execution trace.

How To Integrate

Integrate ResolveKit iOS SDK in 3 steps

This walkthrough is based on the real `playbook-ios-sdk` APIs and runtime lifecycle.

iOS SDK is available now. Android, Next.js, React, React Native, and Flutter SDKs are coming soon.


Define your tool functions with @ResolveKit

Function Source
Runtime resolves inline functions and function packs. Function names must stay unique across all sources.
import PlaybookCore

// A state-changing on-device action: a typical candidate for approval-required policy.
@ResolveKit(name: "set_lights", description: "Turn lights on or off in a room", timeout: 30)
struct SetLights: PlaybookFunction {
    func perform(room: String, on: Bool) async throws -> String {
        let brightness = on ? 100 : 0
        return "Set \(room) lights to \(brightness)%"
    }
}

// A read-only info fetch: a typical candidate for auto-run policy.
@ResolveKit(name: "get_weather", description: "Get current weather for a city", timeout: 10)
struct GetWeather: PlaybookFunction {
    func perform(city: String) async throws -> String {
        "\(city): sunny, 22°C"
    }
}

Use stable snake_case names: the backend stores tools by function name.

Each function must be `async throws` and implemented on a struct conforming to `PlaybookFunction`.
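Once the functions are defined, the remaining steps wire them into a session and present the chat surface. The sketch below is a hedged illustration of that lifecycle: `ResolveKitClient`, `registerFunctions`, and `makeChatViewController` are assumed names for this walkthrough, not the published SDK API.

```swift
import PlaybookCore
import UIKit

// Hypothetical wiring sketch; exact types and methods may differ in the real SDK.
final class SupportLauncher {
    func launchSupport(from presenter: UIViewController) async throws {
        // Assumed entry point: configure the client with your app key.
        let client = ResolveKitClient(appKey: "YOUR_APP_KEY")

        // Assumed registration call: inline @ResolveKit functions and
        // function packs are resolved here; names must stay unique.
        try await client.registerFunctions([SetLights(), GetWeather()])

        // Assumed UI surface: present the embedded support chat.
        let chat = try await client.makeChatViewController()
        presenter.present(chat, animated: true)
    }
}
```

The shape mirrors the three advertised steps: define tools, register them with the client, and present the embedded chat.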

Trust and Control

Per-function approval policy with full execution visibility

Require approval where risk exists, and skip unnecessary prompts for low-risk information-retrieval functions.

Phase 1

Set policy

Developers mark each function as approval-required, or as auto-run for safe read-only information-fetch actions.

Phase 2

Propose

Agent explains why a function call can resolve the issue.

Phase 3

Approve

Sensitive actions pause for explicit user approval directly inside chat.

Phase 4

Resolve

Approved or auto-run actions execute, then results and trace data are recorded.
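The four phases above reduce to a simple gate before execution. This is an illustrative, self-contained sketch of that policy logic; the type and function names are assumptions for clarity, not the shipped SDK API.

```swift
// Per-function policy, set by the developer (Phase 1).
enum ApprovalPolicy {
    case approvalRequired   // sensitive action: pause and ask the user in chat
    case autoRun            // safe read-only/info-fetch action
}

// A function call the agent has proposed (Phase 2).
struct PendingCall {
    let functionName: String
    let policy: ApprovalPolicy
}

// Decide whether a proposed call may execute (Phases 3-4).
// `userApproved == nil` means the user was never asked.
func mayExecute(_ call: PendingCall, userApproved: Bool?) -> Bool {
    switch call.policy {
    case .autoRun:
        return true                  // no prompt needed
    case .approvalRequired:
        return userApproved == true  // explicit consent only
    }
}
```

Auto-run calls proceed immediately; approval-required calls execute only on an explicit "yes", and anything else (decline or no answer) blocks execution.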

Operational clarity

Every turn is traceable

See decisions, retrieved context, approvals, function payloads, and final outcomes per session.

Developer speed

One dashboard, all app configs

Manage prompts, functions, limits, languages, and chat behavior without shipping app updates for each tweak.

Safer automation

Guardrails before actions

Function eligibility can be constrained by platform, app version, and custom session fields.
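A guardrail like this can be modeled as a predicate over the session. The sketch below is self-contained and illustrative; the field names follow the constraints listed above (platform, app version, custom session fields) but are assumptions, not the real SDK schema.

```swift
// Assumed session shape: what the runtime knows about the current user session.
struct SessionContext {
    let platform: String          // e.g. "ios", "android"
    let appVersion: Int           // monotonically increasing build number
    let fields: [String: String]  // custom session fields set by the host app
}

// Assumed guardrail shape: constraints a function must satisfy to be eligible.
struct FunctionGuardrail {
    let allowedPlatforms: Set<String>
    let minAppVersion: Int
    let requiredFields: [String: String]

    func isEligible(in session: SessionContext) -> Bool {
        guard allowedPlatforms.contains(session.platform),
              session.appVersion >= minAppVersion else { return false }
        // Every required custom field must match the session's value.
        return requiredFields.allSatisfy { session.fields[$0.key] == $0.value }
    }
}
```

Ineligible functions are simply never offered to the agent, so the guardrail runs before any propose/approve step.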

Visual understanding

Learns UI layout from images

Ingestion can process relevant guide images and screenshots so the agent can reason about where things are and how flows move.

Operator Command

Control prompts, functions, limits, and session traces from one dashboard

Keep assistant behavior consistent across every app surface while still adapting to platform, version, real-time session context, and knowledge-base vision mode (OCR-safe or full multimodal).