Methodology

A transparent, technical account of how EarthquakeTracker gathers, processes, and publishes earthquake data. We believe users should be able to verify every number we show against its primary source, and every editorial claim against the reasoning behind it.

1. What this site does, and does not do

EarthquakeTracker.org is an independent, informational website that presents U.S. Geological Survey (USGS) earthquake data in formats optimized for general-public understanding. We do not operate seismographs, we do not compute magnitudes, we do not relocate hypocenters, and we do not issue alerts, warnings, or advisories of any kind. The scientific measurements we display — magnitude, depth, coordinates, intensity, felt-report counts — are fetched verbatim from the USGS Earthquake Hazards Program public API and rendered in our layouts.

What we add on top of the raw feed is a navigational and editorial layer: regional filters, location pages for every U.S. state and 100+ countries, plain-language event narratives derived from the USGS fields, aggregate statistics computed on the fetched data, and educational context explaining what the numbers mean. If you want the raw, unmediated scientific record, the USGS source link is present on every event page on this site.

2. Data sources

All earthquake data on this site originates from the U.S. Geological Survey (USGS) Earthquake Hazards Program, with the following specific endpoints:

  • FDSN Event Web Service — earthquake.usgs.gov/fdsnws/event/1/. The canonical programmatic interface for querying USGS earthquake records with parameters such as time range, magnitude threshold, geographic bounding, and distance from a center point.
  • GeoJSON summary feeds — earthquake.usgs.gov/earthquakes/feed/v1.0/summary/. Pre-aggregated rolling feeds (past hour, day, week, month) used for our live-map and recent-activity views.
  • Event detail pages — linked from every event record we display. We do not cache event detail pages; we link users through to the USGS page for full ShakeMap, DYFI, PAGER, and moment-tensor data.
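As a concrete illustration, a query against the FDSN event service is just a URL with well-known parameters. The sketch below builds such a URL; the parameter names (format, starttime, minmagnitude, latitude, longitude, maxradiuskm) come from the public USGS FDSN Event Web Service, while the specific dates, coordinates, and radius are illustrative:

```typescript
// Sketch: constructing an FDSN event query URL.
// Parameter names are from the USGS FDSN Event Web Service;
// the concrete values here are illustrative only.
const base = "https://earthquake.usgs.gov/fdsnws/event/1/query";

const params = new URLSearchParams({
  format: "geojson",        // response format
  starttime: "2026-04-01",  // inclusive start of time window
  endtime: "2026-04-02",    // exclusive end of time window
  minmagnitude: "2.5",      // magnitude threshold
  latitude: "34.05",        // center point (example coordinates)
  longitude: "-118.25",
  maxradiuskm: "300",       // distance-from-center filter
});

const url = `${base}?${params.toString()}`;
```

The same parameters back the region-filtered views described later in this document; only the center point and radius change per region.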

The USGS feed aggregates contributions from the Advanced National Seismic System (ANSS), regional networks including the Alaska Earthquake Center and the Hawaiian Volcano Observatory, the National Earthquake Information Center in Golden, Colorado, and international partners such as the European-Mediterranean Seismological Centre, GFZ Potsdam, and the Japan Meteorological Agency. We do not query these upstream networks directly; the USGS feed is the consolidated, authoritative source.

We do not scrape any site, and we do not use third-party aggregators. Every data point is pulled from an official public USGS API with attribution and rate limits respected.

3. Ingestion pipeline and update cadence

A scheduled cron job runs every 5 minutes and issues a small set of parameterized requests against the USGS feed: the rolling 24-hour M2.5+ window, the rolling 30-day M2.5+ window, and the USGS-significant feed. The response payload is normalized into our internal Earthquake record type, persisted as a single snapshot object, and indexed by a cache tag.

Each public page — home, the live tracker hub, state pages, city pages, country pages, individual event pages — reads from this snapshot rather than hitting the USGS API on every request. This serves three goals: respecting USGS rate limits, guaranteeing consistent counts across pages rendered in the same window, and keeping page-load latency low.

Pages use incremental static regeneration (ISR) on a 5-minute interval. In practical terms, a count or event listed on a state page is never more than five minutes behind the underlying USGS feed, and the feed itself typically updates within one minute of an earthquake occurring. End-to-end lag between a seismic event and its appearance on this site is therefore approximately 1–6 minutes.
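The freshness decision at the heart of this cadence is simple: compare the snapshot's age against the revalidation interval. A framework-agnostic sketch of that check follows; the Snapshot shape and constant names are hypothetical, not the site's actual types:

```typescript
// Minimal sketch of the snapshot-freshness check behind the 5-minute cadence.
// The Snapshot interface and REVALIDATE_SECONDS name are illustrative.
interface Snapshot {
  fetchedAt: number;   // epoch ms of the last successful USGS fetch
  events: unknown[];   // normalized Earthquake records
}

const REVALIDATE_SECONDS = 300; // 5-minute ISR interval

function needsRevalidation(snap: Snapshot, now: number = Date.now()): boolean {
  return (now - snap.fetchedAt) / 1000 >= REVALIDATE_SECONDS;
}
```

In a Next.js-style setup this check is typically delegated to the framework's own revalidation machinery rather than hand-rolled, but the logic is the same: pages read the cached snapshot until its age crosses the interval.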

Event detail pages (at URLs like /earthquakes/event/[id]/) fetch their content on-demand when first requested, are then cached, and revalidate on the same 5-minute cadence. Individual earthquakes are long-lived in the USGS catalog, but their magnitudes and solutions can be refined by seismologists in the hours and days after the event; the ISR cadence lets revised solutions appear within minutes of USGS republishing them.

4. What we compute on top of the raw feed

We compute the following quantities from the fetched records. None of these override USGS scientific values; they are secondary summaries derived by standard arithmetic or rule-based classification:

  • Counts: by time window (24 h, 7 d, 30 d, year-to-date) and by magnitude band (M2.5–3.9, M4.0–4.9, M5.0+).
  • Regional aggregates: for every state, city, and country page, we filter the snapshot to events within a radius of the region's reference coordinates (typical radius: 200–400 km depending on region size) using the haversine formula.
  • Depth classification: shallow (< 70 km), intermediate (70–300 km), deep-focus (> 300 km) per standard seismological convention.
  • Aftershock sequence detection: an event is tagged as a potential mainshock if it has at least three smaller events within 20 km and 72 hours after its origin time. This is a simple spatio-temporal window, not a statistically modeled declustering, and is intended as a navigational cue, not a formal declaration.
  • Event narratives: each event's prose description is generated by a rule-based template that selects sentence variants based on the event's own fields (magnitude, depth, alert level, felt-report count, offshore/onshore, sequence context). The narrative contains no invented facts; every datum inside it traces to a USGS field or to a local derivation described above. See our editorial standards for the full disclosure on this automated prose layer.
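Two of the derivations above are compact enough to show directly: the haversine distance used for radius-based regional filtering, and the depth bands. This is a minimal sketch using the standard haversine formula and the thresholds stated in the list; function names are illustrative:

```typescript
// Haversine great-circle distance in km, as used for radius-based
// regional filtering around a region's reference coordinates.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371; // mean Earth radius, km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Depth classification per the bands above (standard seismological convention).
function classifyDepth(depthKm: number): "shallow" | "intermediate" | "deep-focus" {
  if (depthKm < 70) return "shallow";
  if (depthKm <= 300) return "intermediate";
  return "deep-focus";
}
```

A regional page then reduces to filtering the snapshot to events where haversineKm(event, regionCenter) falls inside the region's tuned radius.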

5. Quality checks and staleness handling

A healthy snapshot is one fetched successfully from USGS within the last 30 minutes. We expose a public health endpoint at /api/health that reports the snapshot status (ok, stale, or failing), the age of the latest successful USGS fetch in seconds, and the key counts.
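The status reported by the endpoint follows directly from the snapshot's age. The sketch below captures that mapping under one stated assumption: "failing" corresponds to having no successful fetch at all, which the text implies but does not spell out. Names are illustrative:

```typescript
// Sketch of the status derivation behind the /api/health endpoint.
// Thresholds come from the text; the "failing" condition (no snapshot
// available at all) is an assumption on our part.
type HealthStatus = "ok" | "stale" | "failing";

const STALE_AFTER_SECONDS = 30 * 60; // 30-minute staleness threshold

function healthStatus(ageSeconds: number | null): HealthStatus {
  if (ageSeconds === null) return "failing"; // no successful USGS fetch to report
  return ageSeconds <= STALE_AFTER_SECONDS ? "ok" : "stale";
}
```

The same three states drive the page-level banners described below: "stale" maps to the amber banner, "failing" to the red one.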

If the last successful fetch is older than 30 minutes, a visible amber banner appears at the top of every page informing the reader the data may be stale. If the snapshot is unavailable entirely, a red banner is shown and we degrade gracefully — counts and lists may be absent while the reference content (educational pages, faults, history) remains reachable. This is a deliberate fail-visible design: a silent failure that served outdated numbers would be worse than an acknowledged outage.

We do not modify or interpolate missing fields. If USGS has not yet computed a moment magnitude for an event, our tracker displays the initial magnitude type (typically ML or mb) and the "Auto" status flag. Once USGS publishes a reviewed solution, the ISR revalidation picks up the new values and the page is regenerated.

6. Reviewer process

Scientific data on this site is not reviewed by us — it is rendered verbatim from USGS. USGS solutions themselves go through their internal review process, and earthquakes marked "Reviewed" on our site have been signed off by a USGS seismologist; events marked "Auto" are the automatic initial solution and may still be refined.

Editorial and educational content (articles in the Learn section, fault profiles in the Faults section, historical event entries under History, preparedness guides under Prepare) is written and reviewed by the EarthquakeTracker editorial team. Every factual claim in these pages is checked against at least one authoritative source (USGS, IRIS, academic literature, FEMA, Red Cross) before publication. Our full sourcing and fact-checking commitments are published on the editorial standards page.

7. Known limitations

We publish limitations openly because users deserve to know the edges of what this site can and cannot do:

  • Detection threshold: the global catalog is considered essentially complete at magnitude 2.5 in well-instrumented regions (continental U.S., Japan, western Europe), but many smaller events in remote regions or deep ocean basins are missed simply because no station recorded them. A "0 earthquakes in the past 24 hours" result for a remote region should be read as "none detected by the monitored network", not "none occurred."
  • Magnitude revisions: the first automatic magnitude of a large earthquake can shift by 0.2–0.5 units in the first 30 minutes as the W-phase moment tensor is computed. We display whatever USGS currently reports, with a status flag. Headline numbers on news sites that don't refresh may drift from ours and from USGS.
  • Regional summaries: our state, city, and country pages use a radius-based filter around reference coordinates. Events exactly on the boundary of the radius may appear on adjacent pages; we accept this overlap over the alternative of a strict political-boundary polygon test, which would require per-event geocoding and has its own error modes. The radius is tuned per region to balance recall and specificity.
  • Aftershock detection: our 20 km / 72 h window is a heuristic, not a statistical declustering. It will flag genuine aftershock sequences but can misclassify coincidental nearby activity or miss widely distributed sequences.
  • Time zones: all displayed times in listings and charts are UTC unless explicitly labeled with a local time zone. The "today" concept on the /earthquakes/today/ page uses a rolling 24-hour window from the current moment, not a calendar day.
  • No early warning: this site is not an earthquake early-warning system and must not be used as one. Systems such as USGS ShakeAlert operate on sub-second timescales; we operate on 5-minute ISR. For genuine warnings, use official channels.
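The 20 km / 72 h mainshock-tagging heuristic from section 4, whose limits are noted above, can be sketched as a pure filter over the catalog. The Ev shape and function names are hypothetical, and the distance function is injected so the haversine implementation can live elsewhere:

```typescript
// Sketch of the 20 km / 72 h potential-mainshock heuristic described in
// section 4. Ev and the helper names are illustrative, not the site's types.
interface Ev {
  id: string;
  mag: number;
  time: number; // origin time, epoch ms
  lat: number;
  lon: number;
}

function isPotentialMainshock(
  candidate: Ev,
  catalog: Ev[],
  distKm: (a: Ev, b: Ev) => number // e.g. a haversine wrapper
): boolean {
  const windowMs = 72 * 3600 * 1000; // 72-hour window after origin time
  const followers = catalog.filter(
    (e) =>
      e.id !== candidate.id &&
      e.mag < candidate.mag &&            // smaller events only
      e.time > candidate.time &&          // strictly after the candidate
      e.time - candidate.time <= windowMs &&
      distKm(candidate, e) <= 20          // within 20 km
  );
  return followers.length >= 3; // at least three qualifying followers
}
```

As the limitation above notes, this window test happily flags coincidental nearby activity and misses spatially distributed sequences; it is a navigational cue only.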

8. Methodology changelog

Substantive methodology changes are logged here. Cosmetic or purely structural updates to the site are not logged.

  • April 2026 — Introduced 5-minute scheduled snapshot refresh with tag-based invalidation. Added amber/red staleness banners and public /api/health endpoint. Expanded per-event narratives with priority-sorted sentence templates.

Related transparency pages