How It Works
From URL to structured data — the full flow
Ev3ry follows a four-step process to go from a website URL to structured, validated data.
Step 1: Add a website
Create a website entry with the URL you want to extract from. Add a short description of what data you're after — this guides the agent's exploration. If the site requires authentication, attach a saved login.
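A website entry is just the URL plus a hint that steers the agent. As a minimal sketch of what such an entry holds (the field names here are illustrative, not Ev3ry's actual API):

```python
def make_website_entry(url, description, login_id=None):
    """Build a website-entry payload.

    Field names are illustrative, not Ev3ry's actual API. `login_id`
    references a saved login for sites that require authentication.
    """
    entry = {"url": url, "description": description}
    if login_id is not None:
        entry["login_id"] = login_id
    return entry

entry = make_website_entry(
    "https://example.com/fixtures",
    "Upcoming match schedule with kickoff times and home-win odds",
)
```

The description matters more than it looks: it is the agent's only statement of intent, so "upcoming fixtures with odds" will steer exploration better than "sports data".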
Step 2: Define a schema
A schema is a JSON Schema object that describes the shape of the data you want back.
For example, a match schedule schema:
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "home_team": { "type": "string" },
      "away_team": { "type": "string" },
      "kickoff": { "type": "string", "format": "date-time" },
      "venue": { "type": "string" },
      "odds_home": { "type": "number" }
    },
    "required": ["home_team", "away_team", "kickoff"]
  }
}
Schemas are defined once and can be reused across multiple websites. You can write them manually or paste in a template from the Schemas library.
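Validating against a schema like this means checking each record for the required fields and basic types. The sketch below is a simplified stand-in for a full JSON Schema validator (in practice a library such as jsonschema covers the whole spec), using the match-schedule schema above:

```python
# The match-schedule schema from above, as a Python dict.
SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "home_team": {"type": "string"},
            "away_team": {"type": "string"},
            "kickoff": {"type": "string", "format": "date-time"},
            "venue": {"type": "string"},
            "odds_home": {"type": "number"},
        },
        "required": ["home_team", "away_team", "kickoff"],
    },
}

# JSON Schema type names mapped to Python types (subset, for illustration).
TYPE_MAP = {"string": str, "number": (int, float)}

def validate_items(items, schema):
    """Check each record for required fields and basic types.

    A simplified stand-in for a real JSON Schema validator: it ignores
    formats (e.g. date-time) and nested schemas.
    """
    item_schema = schema["items"]
    required = item_schema.get("required", [])
    props = item_schema["properties"]
    errors = []
    for i, rec in enumerate(items):
        for key in required:
            if key not in rec:
                errors.append(f"item {i}: missing required field '{key}'")
        for key, val in rec.items():
            expected = TYPE_MAP.get(props.get(key, {}).get("type"))
            if expected and not isinstance(val, expected):
                errors.append(f"item {i}: '{key}' should be {props[key]['type']}")
    return errors
```

A record missing `away_team` fails validation; `venue` and `odds_home` may be absent because they are not in `required`.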
Step 3: Run the extraction
Click Run on your website page. The AI agent takes over:
- Opens a browser session (local or cloud)
- Navigates to your website
- Explores the page — DOM, network requests, framework globals — to find the best data source
- Writes an extraction script and validates output against your schema
- Returns the structured data
You can watch the agent work in real time through the live view — see the browser, the agent's decisions, and data appearing as it's collected.
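Conceptually, the run pairs exploration with validation: the agent keeps drafting extraction scripts until the output matches the schema. A simplified sketch of that loop, where `explore`, `write_script`, and `validate` are stand-ins for the agent's internals:

```python
def run_extraction(page, explore, write_script, validate, max_attempts=3):
    """Simplified agent loop (stand-in logic, not Ev3ry's implementation).

    Explore the page for a data source, draft an extraction script,
    and retry until the output validates against the schema.
    """
    for attempt in range(1, max_attempts + 1):
        source = explore(page)          # DOM, network requests, framework globals
        script = write_script(source)   # agent drafts an extraction script
        data = script(page)
        if validate(data):              # output matches the schema?
            return data
    raise RuntimeError(f"no valid extraction after {max_attempts} attempts")
```

The retry bound matters: exploration is the expensive, non-deterministic part, which is exactly what playbooks (next step) let you skip.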
Step 4: Save a playbook
After a successful run, save the result as a playbook. This captures:
- The navigation steps the agent took
- The extraction script it wrote
- The schema it mapped to
Future runs using this playbook skip the AI exploration phase and go straight to extraction — deterministic, fast, and low-cost.
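A playbook is essentially the frozen output of a successful run. As an illustrative structure (field names are assumed, not Ev3ry's storage format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Playbook:
    """Everything needed to replay a run without AI exploration.

    Illustrative shape only; field names are assumptions.
    """
    website_url: str         # the site the playbook was recorded against
    navigation_steps: list   # ordered actions the agent took (clicks, waits, ...)
    extraction_script: str   # the extraction script the agent wrote
    schema_id: str           # the schema the output was validated against

pb = Playbook(
    website_url="https://example.com/fixtures",
    navigation_steps=["open /fixtures", "wait for table"],
    extraction_script="/* agent-generated script */",
    schema_id="match-schedule",
)
```

Replaying means executing `navigation_steps` and `extraction_script` directly, then validating against the stored schema, with no model calls in the loop.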
What comes after
- Download results as JSON or CSV
- Schedule recurring runs via cron expression or webhook trigger
- Chain extractions into a workflow when you need data from multiple websites in sequence
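The JSON-to-CSV step needs nothing beyond the standard library. A sketch that flattens a list of records (assuming they share flat fields, as in the match-schedule example):

```python
import csv
import io

def json_records_to_csv(records):
    """Render a list of flat dicts as CSV text.

    The header is taken from the first record's keys, so this assumes
    all records share the same flat fields.
    """
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

For nested extraction results you would flatten or pick columns first; CSV only represents one flat table.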