API for integrating snapshots into your systems and pipelines
The snapshotarchive dashboard is fine for manual work — add a URL, look at the changes, download a PDF. But the moment your snapshot archive becomes part of something larger (a monthly client report, a legal process, a support alerting system), clicking through the interface stops being a workable answer.
So everything you can do in the dashboard, you can also do through the REST API. Create projects and monitors, trigger snapshots outside the schedule, pull PDFs, PNGs, HTML, or the whole thing as a single ZIP archive. Get diffs back with the coordinates of changed regions. Catch alerts through a webhook straight into Slack or your backend. All of it on paid plans starting with Starter.
What the API actually solves
The API isn't an end in itself. It covers specific tasks that are awkward or impossible to do in the dashboard:
Automated client reports at agencies. Once a month a cron job pulls every snapshot for a client's projects, packages them with GET /snapshots/{id}/package, and drops them into Google Drive or emails them with a generated link.
Triggering capture from events in your own system. A release goes out, a content editor saves a change, a deploy completes — POST /monitors/{id}/trigger queues a snapshot outside the regular schedule, and a few seconds later you have a record of production right after the change.
Visual change alerts in Slack or an internal tool. Set alert_webhook_url or alert_slack_webhook_url when creating a monitor and we POST to it whenever the visual diff crosses the threshold. No polling, no separate notification system.
Legal and compliance pipelines. A weekly script grabs the latest PDFs with certificates and SHA-256 hashes for the company's public pages (Terms, Privacy Policy, rate cards) and stores them in a secured archive. The legal team gets a structured folder, not a "remember to log in to the dashboard" calendar reminder.
Programmatic version comparison. GET /diffs/{id} returns the diff image plus the coordinates of changed regions, the change percentage, and links to the before/after snapshots. Enough to build your own UI, or wire it into Jira or Linear so every significant change becomes a ticket.
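As a sketch of that last case, here is what turning a diff response into a ticket might look like. The field names used below (change_percent, regions, before_snapshot_url, after_snapshot_url, monitor_url) are assumptions inferred from the description above, not the documented schema — check the full reference for the real ones.

```python
# Sketch: turn a GET /diffs/{id} response into a Jira/Linear-style ticket
# payload. All field names here are assumptions, not the documented schema.

def diff_to_ticket(diff: dict) -> dict:
    """Build a ticket payload from a diff response."""
    regions = diff.get("regions", [])
    title = (
        f"Visual change on {diff.get('monitor_url', 'unknown URL')}: "
        f"{diff.get('change_percent', 0):.1f}% of the page changed"
    )
    body_lines = [
        f"Changed regions: {len(regions)}",
        f"Before: {diff.get('before_snapshot_url', 'n/a')}",
        f"After: {diff.get('after_snapshot_url', 'n/a')}",
    ]
    # One line per region so a reviewer can jump straight to the area.
    for r in regions:
        body_lines.append(f"- region at ({r['x']}, {r['y']}), {r['w']}x{r['h']} px")
    return {"title": title, "body": "\n".join(body_lines)}
```

The resulting dict would then go to the Jira or Linear API with whatever project and label conventions your team uses.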
What the API actually consists of
Below is the high-level picture. The full reference lives separately, with cURL, PHP, JavaScript, and Python examples for everything.
The API is built around four resources:
Projects — containers that group monitors. Usually one per site or per client.
Monitors — the URLs being tracked, with their settings: viewport, capture frequency, diff threshold, which selectors to hide (cookie banners, for example), where to send alerts.
Snapshots — individual captures, each with a UUID, a status (pending, processing, completed, or failed), the server's HTTP status, response headers, console errors, response time, and page weight. Every snapshot carries three files: the PNG screenshot, the PDF with its certificate, and a gzip archive of the original HTML.
Diffs — visual differences between two consecutive snapshots of the same monitor. With region coordinates, total change percentage, and a significance flag.
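To make the monitor settings concrete, here is a sketch of what a creation payload for POST /monitors might contain. alert_webhook_url and alert_slack_webhook_url appear in this article; the other field names (viewport, frequency, diff_threshold, hide_selectors) are illustrative guesses — the full reference has the authoritative list.

```python
import json

# Sketch of a monitor-creation payload. Only the alert webhook fields are
# taken from this article; the other keys are illustrative guesses.

def monitor_payload(project_id: str, url: str) -> dict:
    return {
        "project_id": project_id,
        "url": url,
        "viewport": {"width": 1280, "height": 800},
        "frequency": "daily",                  # capture schedule
        "diff_threshold": 1.0,                 # % change that counts as significant
        "hide_selectors": ["#cookie-banner"],  # hidden before capture
        "alert_webhook_url": "https://hooks.example.com/snapshots",
    }

payload = monitor_payload("proj_123", "https://example.com/pricing")
print(json.dumps(payload, indent=2))
```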
Authentication is a Bearer token in the Authorization header. Keys are created and revoked from the dashboard, and the limit on how many you can have depends on the plan. The rate limit is 60 requests per minute per key, with X-RateLimit-Limit and X-RateLimit-Remaining headers in every response. Errors come back in a single format: an error object with code, message, and status fields.
Webhooks instead of polling
Instead of asking the API once a minute "anything new yet?", you set a webhook URL on the monitor itself. When the system records a significant visual change (a diff that exceeds the monitor's configured threshold), we POST to the URL you gave us.
You get two webhook fields on each monitor: a regular webhook (any endpoint of yours) and a Slack incoming webhook. Use one, use both, doesn't matter. Slack receives a ready-to-read message with before and after screenshots and a link to the diff in the dashboard. Your own webhook gets JSON with the diff data, which you handle however you want — open a ticket, raise an alert, kick off additional checks.
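The receiving end of your own webhook can stay small. The payload shape below (a diff_id field) is an assumption — log the first real delivery to see the actual fields — but the pattern holds: validate, acknowledge fast, and do the slow work elsewhere.

```python
import json

# Sketch of a webhook endpoint's core logic. The payload field (diff_id)
# is an assumption about the delivery format, not the documented schema.

def handle_webhook(raw_body: bytes) -> int:
    """Return the HTTP status the endpoint should answer with.

    Answer 200 quickly; slow work (tickets, rechecks) belongs on a
    queue, not inside the webhook request.
    """
    try:
        event = json.loads(raw_body)
    except ValueError:
        return 400  # not JSON; reject so a misconfiguration is visible
    diff_id = event.get("diff_id")
    if not diff_id:
        return 400
    # Hand off to your own pipeline here: enqueue a job that fetches
    # GET /diffs/{diff_id} and opens a ticket or raises an alert.
    print(f"queued follow-up for diff {diff_id}")
    return 200
```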
When the API makes sense and when it's overkill
The API is worth using if you:
work with dozens or hundreds of URLs and the dashboard becomes the bottleneck;
want snapshots wired into existing pipelines: CI/CD, Slack, Jira, an internal dashboard, client reports;
need to trigger captures from events in your own system, not only on a schedule;
need programmatic access to diffs to build your own change analytics.
The API isn't worth it if you're using snapshotarchive for two of your own sites and checking changes manually once a week. The dashboard handles that case faster, and there's not much point in writing automation around something you'd click twice a week anyway.
Where the limits are and what to know upfront
A few practical points.
The rate limit is the same on every paid plan: 60 requests per minute per key. For most pipelines that's plenty, but if you're planning a bulk pull (say, fetching a thousand old snapshots in one run), you'll need to batch with the limit in mind. The X-RateLimit-Remaining header in the response makes a backoff easy to build.
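A bulk pull that respects the limit might look like this. The fetch callable is injected so the pacing logic is testable; in real use it would perform the authenticated GET and return the JSON body together with the response headers.

```python
import time

# Sketch of a bulk pull paced by X-RateLimit-Remaining. `fetch` takes a
# path and returns (json_body, headers); it is injected for testability.

def pull_all(snapshot_ids, fetch, sleep=time.sleep):
    results = []
    for sid in snapshot_ids:
        body, headers = fetch(f"/snapshots/{sid}")
        results.append(body)
        remaining = int(headers.get("X-RateLimit-Remaining", "60"))
        if remaining <= 1:
            # Near the 60/min limit: wait out the window before continuing.
            sleep(60)
    return results
```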
The API uses 302 redirects to signed URLs for file downloads — for PDFs, PNGs, HTML, and diff images. Run cURL with -L; most HTTP clients follow redirects by default. The exception is GET /snapshots/{id}/package, which returns the ZIP archive directly in the response body.
Snapshots are created asynchronously. After POST /monitors or POST /monitors/{id}/trigger, the snapshot is queued and processed in the background. To know when it's ready, poll GET /snapshots/{id} and watch the status field move through pending, processing, then completed or failed. Most snapshots are done in 5–15 seconds.
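A minimal polling loop for that status field, with the status lookup injected (in real use it would call GET /snapshots/{id} and return the status string):

```python
import time

# Poll until the snapshot's status reaches completed or failed.
# `get_status` is injected; the status values come from the article.

def wait_for_snapshot(snapshot_id, get_status, timeout=60.0, interval=2.0,
                      sleep=time.sleep):
    waited = 0.0
    while waited < timeout:
        status = get_status(snapshot_id)
        if status in ("completed", "failed"):
            return status
        # pending or processing: capture is still running
        sleep(interval)
        waited += interval
    raise TimeoutError(f"snapshot {snapshot_id} not done after {timeout}s")
```

Given the typical 5–15 second processing time, a 2-second interval with a 60-second ceiling leaves comfortable headroom.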
Retention depends on the plan: 90 days on Pro, less on smaller tiers. Old snapshots are deleted automatically. If you need to keep an archive for longer, pull the files through /package and store them yourself. The PDF with its certificate and SHA-256 hash stays valid no matter where you keep it.
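Because the hash stays valid wherever the file lives, verifying a self-hosted archive is a one-liner. Where exactly the recorded SHA-256 is exposed (a snapshot field, the certificate itself) should be confirmed in the full reference; the check itself is plain hashlib:

```python
import hashlib
from pathlib import Path

# Verify a locally archived PDF against the SHA-256 hash the service
# recorded for it. Where the recorded hash is exposed in the API is an
# assumption to confirm against the full reference.

def verify_archived_pdf(path: Path, recorded_sha256: str) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == recorded_sha256.lower()
```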
What this gives you in the end
snapshotarchive works as a hands-on service if that's what you need. The dashboard handles it. The API exists for the case where it doesn't — archives that have to land in your storage on their own, alerts that need to reach the on-call without anyone sitting on the dashboard, snapshots that fire after a deploy because the deploy itself triggered them. Same product, but it stops being a tab in your browser and becomes one of the boring, reliable pieces running in the background.
The fastest way to evaluate is to take the documentation, create a test key in the dashboard on any of the paid plans, and try it on one or two of your URLs. Fifteen minutes of reading and you'll see which part of your workflow this fits into.