MODULE 01 // SEISMOLOGY // SIMULATION ENGINEERING

How 3D Earthquake Simulations Work

Every dot on a live earthquake globe represents a real seismic rupture, rendered in real time from a USGS data feed through a WebGL pipeline. Learn the full stack — from GeoJSON API to Three.js sphere geometry to depth colour-coding — and what design decisions make live seismic data legible at planetary scale.

SOURCE USGS · THREE.JS · WEBGL
UPDATED MARCH 2026
READ TIME ~11 MIN
🌐 OPEN 3D TRANSPARENT GLOBE
60fps
TARGET WEBGL RENDER RATE
~10k
QUAKES IN USGS 30-DAY CATALOG
3 min
USGS API LATENCY AFTER DETECTION
30+
LIVE SIMULATIONS ON PANDITA DATA

A live 3D earthquake simulation is not a video or a pre-rendered animation. It is a data pipeline that runs continuously: fetching JSON from a seismic agency API, converting geographic coordinates to 3D Cartesian positions on a sphere, scaling sphere geometry by seismic moment, colouring by depth, and drawing the result 60 times per second in the browser using the GPU. Every dot you see represents a real event that happened — its position, size, and colour each encoding a distinct physical property. This article walks the full stack, from the USGS feed to the pixel on your screen.

THE DATA SOURCE: USGS EARTHQUAKE CATALOG API

The foundation of every Pandita Data seismic simulation is the USGS Earthquake Hazards Program GeoJSON feed. The USGS operates a global seismic network and publishes near-real-time earthquake catalogs via a public REST API. Each endpoint returns a GeoJSON FeatureCollection — a standard geographic data format — where every feature is one earthquake event with properties including magnitude, depth, location name, origin time, and a unique event ID. The feeds are updated every minute for significant events and every five minutes for the full catalog.

USGS GEOJSON FEED
// Endpoint structure
https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/{magnitude}_{period}.geojson

// Examples
all_hour.geojson          → all events, last 60 minutes
2.5_day.geojson           → M2.5+, last 24 hours
significant_month.geojson → significant events, 30 days

// Each feature contains:
{
  geometry: {
    coordinates: [longitude, latitude, depth]  // depth in km is the third coordinate
  },
  properties: {
    mag: 5.4,             // moment magnitude
    place: "...",         // location string
    time: 1709823600000   // Unix ms timestamp
  }
}
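A minimal client-side pass over this feed can be sketched as follows. The endpoint URL is the real USGS feed; the `parseFeed` helper and the field names it returns are illustrative, not Pandita Data's production code.

```javascript
// Illustrative sketch: normalise a USGS GeoJSON FeatureCollection into
// flat earthquake objects the rendering pipeline can consume.
function parseFeed(geojson) {
  return geojson.features.map((f) => ({
    id: f.id,
    lon: f.geometry.coordinates[0],
    lat: f.geometry.coordinates[1],
    depthKm: f.geometry.coordinates[2], // depth is the third coordinate
    mag: f.properties.mag,
    place: f.properties.place,
    time: new Date(f.properties.time),  // Unix ms → Date
  }));
}

// In the browser this would be fetched on load and re-polled (e.g. every 60 s):
//   const res = await fetch(
//     "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson");
//   const quakes = parseFeed(await res.json());
```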

THE RENDERING PIPELINE: FROM COORDINATES TO PIXELS

01
FETCH — GeoJSON API CALL
On load and at a set refresh interval (typically 60 seconds), the simulation fetches the USGS GeoJSON endpoint. The response is parsed into a JavaScript array of earthquake objects. Network latency is typically under 200 ms; the USGS CDN serves cached copies of the feed globally.
▸ FETCH API · JSON.PARSE · 60s POLLING
02
TRANSFORM — GEOGRAPHIC TO CARTESIAN
Earthquake coordinates arrive as [longitude, latitude, depth]. To place them on a 3D sphere in Three.js, these are converted to Cartesian (x, y, z) using spherical coordinate equations. Depth is optionally used to push markers below the surface of the globe for subsurface rendering modes.
▸ SPHERICAL → CARTESIAN · RADIUS OFFSET BY DEPTH
03
SCALE — MAGNITUDE TO SPHERE RADIUS
Magnitude is logarithmic: each whole step represents roughly 32 times more released energy, so sizing markers in proportion to energy would make large earthquakes absurdly oversized. Pandita Data applies a cube-root or power-law transform so the visual size difference between M4 and M7 is perceptible but not overwhelming. The exact scaling is tuned empirically so the Ring of Fire is legible without swamping the globe.
▸ r = BASE × Math.pow(10, (mag − threshold) × SCALE_FACTOR)
04
COLOUR — DEPTH ENCODING
Depth is encoded as hue. Shallow events (0–70 km) render in red/orange — the most visually alarming colours, reflecting that shallow events cause the most surface damage. Intermediate-depth events (70–300 km) map to yellow/green. Deep-focus events (300–700 km) render in blue/cyan, visually receding from the surface. The palette is applied per marker, either through the MeshBasicMaterial colour or through per-instance colours when instanced rendering is used.
▸ 0–70 km: RED · 70–300 km: YELLOW · 300–700 km: BLUE
05
RENDER — THREE.JS WEBGL SCENE
Three.js manages the WebGL context, camera, lighting, and render loop. The globe itself is a high-resolution SphereGeometry with an Earth texture map. Earthquake markers are sphere geometries added to the scene. OrbitControls lets the user rotate, zoom, and pan. The render loop calls requestAnimationFrame, which synchronises drawing to the display refresh rate (typically 60fps), rotating the globe and updating any animated elements each frame.
▸ THREE.JS · WEBGL2 · ORBITCONTROLS · RAF LOOP
06
INTERACT — RAYCASTING & TOOLTIPS
When a user clicks or hovers a marker, Three.js Raycaster casts a ray from the camera through the mouse position into the 3D scene. If the ray intersects a marker sphere, the event properties (magnitude, location, depth, time) are retrieved and displayed as an overlay tooltip. This happens in milliseconds without any server round-trip.
▸ RAYCASTER · POINTER EVENTS · ZERO-LATENCY LOOKUP
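Three.js Raycaster performs the intersection maths internally. Stripped of the library, the per-marker test reduces to a ray-sphere intersection, sketched here in plain JavaScript with illustrative names:

```javascript
// Illustrative: does a ray (origin o, unit direction d) hit a sphere
// (centre c, radius r)? This is the geometric core of per-marker picking.
function raySphereHit(o, d, c, r) {
  // Vector from ray origin to sphere centre
  const oc = { x: c.x - o.x, y: c.y - o.y, z: c.z - o.z };
  // Projection of oc onto the (unit) ray direction
  const t = oc.x * d.x + oc.y * d.y + oc.z * d.z;
  if (t < 0) return false; // sphere is behind the camera
  // Squared distance from sphere centre to the closest point on the ray
  const distSq = (oc.x * oc.x + oc.y * oc.y + oc.z * oc.z) - t * t;
  return distSq <= r * r;
}
```

In the live scene, Raycaster additionally returns the intersection distance so the nearest hit marker wins when several overlap.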
COORDINATE TRANSFORM
// Geographic (lon, lat) → 3D Cartesian on sphere of radius R
function toCartesian(lon, lat, R) {
  const phi   = (90 - lat) * (Math.PI / 180);   // polar angle
  const theta = (lon + 180) * (Math.PI / 180);  // azimuthal angle
  return new THREE.Vector3(
    -R * Math.sin(phi) * Math.cos(theta),
     R * Math.cos(phi),
     R * Math.sin(phi) * Math.sin(theta)
  );
}

// Depth offset: push marker below surface
const r = GLOBE_RADIUS - (depthKm / MAX_DEPTH) * DEPTH_SCALE;
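The scale (03) and colour (04) steps can be sketched as plain helpers. The names match those used in the instanced-rendering snippet later in the article, but the constants and the stepped colour bands are illustrative approximations, not the tuned production values:

```javascript
// Illustrative constants — the production values are tuned empirically
const BASE = 0.5, THRESHOLD = 2.5, SCALE_FACTOR = 0.15;

// Step 03: logarithmic magnitude → marker radius (power-law compression)
function magnitudeToRadius(mag) {
  return BASE * Math.pow(10, (mag - THRESHOLD) * SCALE_FACTOR);
}

// Step 04: depth in km → RGB band (red shallow → blue deep), a stepped
// approximation of the continuous palette
function depthToColor(depthKm) {
  if (depthKm < 70)  return { r: 1.0, g: 0.3, b: 0.1 }; // shallow: red/orange
  if (depthKm < 300) return { r: 0.9, g: 0.9, b: 0.2 }; // intermediate: yellow
  return { r: 0.2, g: 0.6, b: 1.0 };                    // deep focus: blue/cyan
}
```

In the Three.js scene these RGB triples would be wrapped in a THREE.Color before being passed to setColorAt.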

EXPLORE THE SIMULATIONS

Pandita Data runs multiple earthquake simulation modes, each emphasising a different dimension of the data. The transparent globe renders the Earth as a semi-opaque shell, making deep-focus earthquakes visible inside the planet — a view that makes the subducting slabs of the Pacific Ring of Fire immediately apparent. The time simulation replays the seismic catalog chronologically, letting you watch sequences unfold. The dashboard distils the raw catalog into charts and statistics.

// 3D TRANSPARENT GLOBE — DEPTH INSIDE THE EARTH
LIVE · WEBGL
// EQ TIME SIMULATION — CHRONOLOGICAL SEQUENCE
INTERACTIVE
// EQ DASHBOARD — LIVE STATS & MAGNITUDE CHARTS
LIVE
🌐
TRANSPARENT GLOBE
→ DEPTH INSIDE EARTH
⏱️
TIME SIMULATION
→ SEQUENCE REPLAY
📊
EQ DASHBOARD
→ LIVE STATISTICS

PERFORMANCE: RENDERING THOUSANDS OF POINTS IN REAL TIME

A 30-day USGS catalog can contain on the order of ten thousand events. Naively adding one Three.js Mesh per earthquake would create that many draw calls per frame — far more submission overhead than the CPU and driver can push to the GPU at 60fps. The solution is instanced rendering: Three.js InstancedMesh submits thousands of identical geometries (all the marker spheres) to the GPU in a single draw call, with each instance carrying its own transform matrix and colour. This reduces draw calls from N events to 1, making the difference between a 2fps slideshow and a smooth interactive globe.

INSTANCED RENDERING
// Create one InstancedMesh for all N earthquakes
const geometry = new THREE.SphereGeometry(1, 8, 8);
const material = new THREE.MeshBasicMaterial();
const mesh = new THREE.InstancedMesh(geometry, material, N);

// Set each instance's position + scale in one matrix
const matrix = new THREE.Matrix4();
earthquakes.forEach((eq, i) => {
  const pos = toCartesian(eq.lon, eq.lat, GLOBE_RADIUS);
  const scale = magnitudeToRadius(eq.mag);
  matrix.makeScale(scale, scale, scale);
  matrix.setPosition(pos);
  mesh.setMatrixAt(i, matrix);
  mesh.setColorAt(i, depthToColor(eq.depth));
});
mesh.instanceMatrix.needsUpdate = true;
if (mesh.instanceColor) mesh.instanceColor.needsUpdate = true;

scene.add(mesh); // 1 draw call for all N earthquakes

THE BRAIN DASHBOARD: INTELLIGENCE LAYER

Beyond raw seismic rendering, Pandita Data's Brain Dashboard adds an interpretive layer — aggregating live feeds across earthquake, weather, space weather, and geohazard data sources into a unified situational awareness panel. Where the 3D simulations answer "what is happening where", the Brain Dashboard answers "what does it mean right now" — surfacing elevated hazard indicators, regional activity anomalies, and cross-domain correlations. It is designed as the command-centre view for users who need synthesised intelligence rather than raw data.

// BRAIN DASHBOARD — PANDITA DATA INTELLIGENCE CENTRE
LIVE
🧠
PANDITA DATA — GEOHAZARD INTELLIGENCE COMMAND CENTRE
→ OPEN THE BRAIN DASHBOARD

DESIGN DECISIONS: MAKING DATA LEGIBLE AT PLANETARY SCALE

🎨
DEPTH COLOUR PALETTE
Red for shallow events is not arbitrary — it follows established seismological convention (USGS interactive seismicity maps use a similar shallow-red to deep-blue depth scale) and aligns with intuitive danger signalling. The full hue sweep from red to blue across the depth range makes subducting slabs — where deep-focus earthquakes concentrate — visible as coherent geometric structures in 3D space.
▸ CONVENTION-CONSISTENT · PERCEPTUALLY ORDERED
📐
MAGNITUDE SCALING
Sizing markers in proportion to seismic energy would make M9 markers the size of continents. The cube-root compression preserves the visible hierarchy — M7 is clearly bigger than M4 — while keeping all markers on-screen simultaneously. The smallest detectable events (M2) are rendered as single pixels to avoid visual clutter while preserving the count.
▸ CUBE-ROOT TRANSFORM · PERCEPTUAL LINEARITY
PULSE ANIMATIONS
Recent events — detected in the last hour — pulse with a ring animation that expands and fades. This temporal encoding lets users immediately distinguish fresh activity from historical background without reading timestamps. The animation is CSS-based to avoid GPU overdraw from multiple shader passes.
▸ CSS RING ANIMATION · TEMPORAL SALIENCE
SIMULATION           | PRIMARY DATA DIM.            | BEST USE CASE
3D Transparent Globe | Depth (inside Earth)         | Visualising subducting slabs and deep seismicity structure
EQ Time Simulation   | Time (sequence)              | Understanding aftershock sequences and temporal clustering
EQ Dashboard         | Statistics / magnitude dist. | Real-time Gutenberg-Richter overview and rate monitoring
Brain Dashboard      | Multi-hazard synthesis       | Situational awareness across all live hazard feeds
3D Earthquake Map    | Geography / global overview  | Real-time global seismicity at a glance
// LATENCY & CATALOG COMPLETENESS

Real-time 3D simulations are only as current as their data source. The USGS GeoJSON feeds have a typical latency of 1–5 minutes after a seismic network detection. Immediately after a large earthquake, small aftershocks are initially under-detected due to seismic noise masking — the first hours of any sequence will show fewer events than actually occurred. The simulations display what the USGS catalog currently contains, not every event that has physically happened.
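Completeness also shapes the statistics a dashboard reports. As one illustration, a Gutenberg-Richter b-value (the overview mentioned in the simulation table above) can be estimated from the live catalog with Aki's maximum-likelihood formula, applied only above the completeness magnitude Mc. This is a textbook estimator, not necessarily Pandita Data's implementation:

```javascript
// Aki (1965) maximum-likelihood b-value with Utsu's bin correction:
//   b = log10(e) / (meanMag - (Mc - dm/2))
// Mc is the completeness magnitude; dm is the catalog's magnitude bin width.
function estimateBValue(mags, Mc, dm = 0.1) {
  const complete = mags.filter((m) => m >= Mc); // discard under-detected events
  if (complete.length === 0) return NaN;
  const mean = complete.reduce((s, m) => s + m, 0) / complete.length;
  return Math.LOG10E / (mean - (Mc - dm / 2));
}
```

Because small aftershocks are under-detected right after a mainshock, Mc rises temporarily, and a b-value computed without accounting for that will be biased.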

FROM SIMULATION TO INTELLIGENCE REPORT

Interacting with a 3D simulation gives you situational awareness — you can see the Ring of Fire, identify active regions, watch sequences evolve. The Pandita Data Disaster Intelligence Report takes the next step: it runs a structured analysis of seismicity, hazard exposure, and risk context for any selected region, generating a formatted report combining live USGS data with geological context. Where simulations visualise, reports contextualise.

📋
PANDITA DATA — GEOHAZARD INTELLIGENCE REPORTS
→ GENERATE A REAL-TIME DISASTER INTELLIGENCE REPORT
