How Ev3ry Is Organized

The relationship between websites, schemas, playbooks, runs, and workflows

Ev3ry has five core objects. Four of them form a hierarchy, each building on the previous; workflows sit alongside the hierarchy and orchestrate playbooks.

Website
  └── Schema (what shape the output should have)
        └── Playbook (how to extract it)
              └── Run (a single extraction attempt)

Workflow (orchestrates multiple playbooks)

Website

A website is the top-level container. It represents a data source — a URL you want to extract from. Everything else lives under it.

A website holds:

  • A starting URL
  • A description that guides the AI agent
  • Associated schemas and playbooks
  • A run history
  • An optional saved login for authenticated pages
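The pieces a website holds can be pictured as a simple record. This is an illustrative sketch only — the field names are assumptions, not Ev3ry's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Website:
    """Illustrative sketch of a website record; all field names are assumptions."""
    start_url: str                                         # the starting URL
    description: str                                       # guidance for the AI agent
    schema_ids: list[str] = field(default_factory=list)    # attached schemas
    playbook_ids: list[str] = field(default_factory=list)  # attached playbooks
    run_ids: list[str] = field(default_factory=list)       # run history
    login_id: Optional[str] = None                         # optional saved login

site = Website(
    start_url="https://example.com/odds",
    description="Extract match odds from the main table",
)
```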

Schema

A schema defines the shape of the data you want returned. It's a JSON Schema object — the same standard used by APIs everywhere.

Schemas are independent of how data is extracted. You define what you want; the agent figures out how to get it.

You can define schemas globally (under Schemas in the sidebar) and attach the same schema to multiple websites. This is useful when you extract the same structure from different sources.
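As a concrete illustration, here is a minimal JSON Schema for match odds, checked with a tiny hand-rolled validator. The schema and the validator are both assumptions for illustration — Ev3ry does not ship either:

```python
import json

# A minimal JSON Schema describing the shape of one extracted record.
# Field names are illustrative, not a schema that ships with Ev3ry.
match_odds_schema = {
    "type": "object",
    "properties": {
        "match":     {"type": "string"},
        "home_odds": {"type": "number"},
        "draw_odds": {"type": "number"},
        "away_odds": {"type": "number"},
    },
    "required": ["match", "home_odds", "draw_odds", "away_odds"],
}

def matches_schema(record: dict, schema: dict) -> bool:
    """Check a tiny subset of JSON Schema: required keys plus primitive types."""
    type_map = {"string": str, "number": (int, float)}
    for key in schema["required"]:
        if key not in record:
            return False
    for key, spec in schema["properties"].items():
        if key in record and not isinstance(record[key], type_map[spec["type"]]):
            return False
    return True

record = json.loads(
    '{"match": "A vs B", "home_odds": 2.1, "draw_odds": 3.4, "away_odds": 3.0}'
)
print(matches_schema(record, match_odds_schema))  # True
```

Because the schema only describes shape, the same one can be attached to every website that yields match odds, regardless of how each site is extracted.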

Playbook

A playbook captures a proven extraction method. It records:

  • The navigation steps the agent took (clicks, scrolls, waits)
  • The extraction script it wrote
  • The schema it mapped to

The relationship is: one website + one schema → one playbook.

Playbooks exist because AI-driven extraction is thorough but slow. The agent needs to analyze the page, explore multiple data sources, and write and validate custom scripts. A playbook saves all of that work so future runs can replay it directly.

                       First run (no playbook)      Subsequent runs (with playbook)
Browser navigation     AI decides where to go       Replays saved steps
Extraction script      AI writes from scratch       Reuses saved script
Validation             AI inspects and retries      Script runs deterministically
Speed                  Slower — full exploration    Fast — straight to extraction
LLM cost               Higher                       Minimal

If a playbook encounters an unexpected page state during replay, it falls back to the AI agent to recover.
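The replay-with-fallback behavior can be sketched in a few lines. Every name here (`execute`, `replay_steps`, `run_ai_agent`, the exception type) is a hypothetical stand-in for the two execution paths, not Ev3ry's real API:

```python
class UnexpectedPageState(Exception):
    """Raised when the page no longer matches the recorded steps (illustrative)."""

def execute(playbook, page, replay_steps, run_ai_agent):
    try:
        # Fast path: replay the recorded steps and rerun the saved script.
        return replay_steps(playbook, page)
    except UnexpectedPageState:
        # Layout changed, element missing, etc.: hand control
        # back to the AI agent so it can recover.
        return run_ai_agent(page, playbook["schema"])

# Demo stand-ins for the two paths.
def replay_ok(pb, page):
    return {"via": "playbook"}

def replay_broken(pb, page):
    raise UnexpectedPageState("selector not found")

def agent(page, schema):
    return {"via": "agent"}

pb = {"schema": {"type": "object"}}
print(execute(pb, "page", replay_ok, agent))      # {'via': 'playbook'}
print(execute(pb, "page", replay_broken, agent))  # {'via': 'agent'}
```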

Run

A run is a single execution of an extraction — either AI-driven (first time) or playbook-driven (replay).

Every run goes through a lifecycle:

Queued — waiting for a browser slot

Running — the agent or playbook is executing in a live browser. Watch in real time via the live view.

Completed — data extracted and validated successfully. Download as JSON/CSV or save as a playbook.

Failed — something went wrong (page changed, login expired, timeout). The error and any partial results are preserved for debugging.
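The lifecycle above is a small state machine. One way to model it — the states come from this page, the transition table is my reading of them:

```python
from enum import Enum

class RunState(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"

# Allowed transitions, as described in the lifecycle above.
TRANSITIONS = {
    RunState.QUEUED:    {RunState.RUNNING},
    RunState.RUNNING:   {RunState.COMPLETED, RunState.FAILED},
    RunState.COMPLETED: set(),  # terminal
    RunState.FAILED:    set(),  # terminal
}

def advance(current: RunState, nxt: RunState) -> RunState:
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current.value} to {nxt.value}")
    return nxt

state = advance(RunState.QUEUED, RunState.RUNNING)
state = advance(state, RunState.COMPLETED)
print(state.value)  # completed
```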

Workflow

A workflow connects multiple playbooks into an automated pipeline. Use workflows when you need data from several websites in sequence, or when one extraction's output feeds the next.

Workflows can be triggered:

  • Manually — click Run in the UI
  • On a schedule — a cron expression (e.g., 0 6 * * * runs daily at 6 AM)
  • Via webhook — an external system sends an HTTP request

How they connect

An example — monitoring match odds across two betting sites:

  1. Website A and Website B — two data sources
  2. Schema "Match Odds" — { home_odds, draw_odds, away_odds, match }
  3. Playbook A — extracts from Site A using the Match Odds schema
  4. Playbook B — extracts from Site B using the same schema
  5. Workflow — runs both playbooks daily and combines results

Each layer adds reusability. Define a schema once, create a playbook once, and the workflow handles scheduling and orchestration.
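Step 5 of the example — combining both playbooks' output — can be sketched as a merge keyed on the match name. The records follow the Match Odds schema above; the field names and output format are illustrative assumptions, not Ev3ry's real output:

```python
# Hypothetical output of the two playbooks (same Match Odds schema).
site_a = [{"match": "A vs B", "home_odds": 2.10, "draw_odds": 3.40, "away_odds": 3.00}]
site_b = [{"match": "A vs B", "home_odds": 2.05, "draw_odds": 3.50, "away_odds": 3.10}]

def combine(a_rows, b_rows):
    """Merge the two sources by match name so odds can be compared side by side."""
    merged = {}
    for source, rows in (("site_a", a_rows), ("site_b", b_rows)):
        for row in rows:
            merged.setdefault(row["match"], {})[source] = row
    return merged

combined = combine(site_a, site_b)
print(combined["A vs B"]["site_a"]["home_odds"])  # 2.1
```

Because both playbooks map to the same schema, the merge step never needs per-site logic — that is the payoff of defining the schema once.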