Last year, I led a project to build a real-time monitoring dashboard for a trading platform. The requirement: visualize market data updating at 1 million events per second with sub-100ms latency to user screens.
It broke everything we thought we knew about data visualization.
The Reality of Real-Time
Here's what nobody tells you: "real-time" usually isn't real-time. And that's often fine.
Most dashboards labeled "real-time" actually refresh every 5-30 seconds. For most use cases, that's perfectly adequate. But when you actually need sub-second updates, the rules change completely.
The Three Tiers of "Real-Time"
Tier 1: Near-Real-Time (5-60 second refresh)
Use cases: Business dashboards, marketing analytics, sales metrics
Architecture: Polling API endpoints, batch aggregation
Complexity: Moderate
This is what most people need. The data team aggregates data every minute, the dashboard polls for updates. Simple and effective.
Tier 2: Real-Time (1-5 second refresh)
Use cases: Operations monitoring, live event tracking, customer support queues
Architecture: WebSockets, server-sent events, streaming queries
Complexity: High
The transition from polling to push changes everything. Now you're maintaining persistent connections, handling reconnection logic, and managing state across client and server.
Tier 3: Sub-Second (under 1 second)
Use cases: Trading platforms, live gaming stats, industrial monitoring
Architecture: Streaming pipelines, specialized databases, optimized rendering
Complexity: Very high
This is where we lived for 8 months. Every optimization matters. Every millisecond counts.
Why Real-Time Visualization is Hard
Problem 1: Data Volume
At 1M events/second, you can't render each event. That's one million points every second. The browser would explode.
Solution: Pre-aggregation. Don't send raw events to the frontend. Aggregate at the source—averages, counts, percentiles per time bucket. We sent 10 aggregated updates per second instead of 1,000,000 raw events.
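A minimal sketch of what source-side pre-aggregation looks like (the names and shapes here are illustrative, not our production code): raw events are grouped into fixed time buckets, and only the per-bucket summaries cross the wire.

```typescript
// Bucket raw events into fixed time windows and emit one summary per
// window. Only these summaries are sent to the frontend.

interface Event { ts: number; value: number }  // ts in milliseconds
interface Bucket { start: number; count: number; avg: number; min: number; max: number }

function aggregate(events: Event[], bucketMs: number): Bucket[] {
  const acc = new Map<number, { count: number; sum: number; min: number; max: number }>();
  for (const e of events) {
    // Floor the timestamp to the start of its bucket.
    const start = Math.floor(e.ts / bucketMs) * bucketMs;
    const b = acc.get(start) ?? { count: 0, sum: 0, min: Infinity, max: -Infinity };
    b.count += 1;
    b.sum += e.value;
    b.min = Math.min(b.min, e.value);
    b.max = Math.max(b.max, e.value);
    acc.set(start, b);
  }
  // Emit buckets oldest-first, with derived averages.
  return [...acc.entries()]
    .sort(([a], [b]) => a - b)
    .map(([start, b]) => ({ start, count: b.count, avg: b.sum / b.count, min: b.min, max: b.max }));
}
```

In production this runs in the stream processor (Flink, in our case), not in application code, but the shape of the output is the same: a handful of summaries per second instead of a flood of events.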
Problem 2: Render Performance
Even with 10 updates per second, re-rendering entire charts kills performance. React's reconciliation, SVG manipulation, canvas redraws—they add up.
Solution: Incremental updates. Don't rebuild the chart; append to it. We used WebGL-based rendering for the highest-frequency charts, which can handle 60fps updates smoothly.
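The data structure behind "append, don't rebuild" is usually a fixed-capacity ring buffer: new points overwrite the oldest ones in place, so the render loop reads from a stable, preallocated array. A simplified sketch (our production version fed a WebGL vertex buffer rather than returning a plain array):

```typescript
// Fixed-capacity ring buffer for chart points. Appending evicts the
// oldest sample instead of reallocating, which keeps per-frame work
// constant no matter how long the chart has been running.

class PointRing {
  private buf: Float64Array;
  private head = 0;   // next write index
  private size = 0;   // how many slots are filled

  constructor(private capacity: number) {
    this.buf = new Float64Array(capacity);
  }

  push(v: number): void {
    this.buf[this.head] = v;
    this.head = (this.head + 1) % this.capacity;
    this.size = Math.min(this.size + 1, this.capacity);
  }

  // Snapshot in oldest-to-newest order, for rendering.
  toArray(): number[] {
    const out: number[] = [];
    const start = (this.head - this.size + this.capacity) % this.capacity;
    for (let i = 0; i < this.size; i++) {
      out.push(this.buf[(start + i) % this.capacity]);
    }
    return out;
  }
}
```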
Problem 3: Human Perception
Here's the counterintuitive finding: updates arriving more often than every ~200ms become a blur. Users can't process discrete changes at 10+ updates per second. They just see flickering.
Solution: Visual smoothing. Even when data updates at 10Hz, we animated transitions over 200ms. The chart felt "live" without feeling chaotic.
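The core of the smoothing is a pure interpolation function: given the previous value, the new target, and the elapsed time, compute where the chart should be right now. A minimal sketch (a real chart would call `tween` from a `requestAnimationFrame` loop; the function and easing choice here are illustrative):

```typescript
// Ease-out cubic: fast at the start, settling gently at the end,
// which reads as "responsive" rather than "laggy".
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Interpolate from the old value toward the new one over ~200ms.
function tween(from: number, to: number, elapsedMs: number, durationMs = 200): number {
  const t = Math.min(elapsedMs / durationMs, 1);  // clamp progress to [0, 1]
  return from + (to - from) * easeOutCubic(t);
}
```

The choice of 200ms matches the perception threshold above: the animation finishes just as the next data point arrives, so the chart is always in motion but never jumps.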
Problem 4: Network Variability
WebSocket connections drop. Packets get delayed. Mobile users switch networks.
Solution: Robust reconnection, message queuing, and graceful degradation. If the connection drops, show the last known state with a "reconnecting" indicator—don't show a blank screen.
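The reconnection schedule itself is simple: exponential backoff with a cap, so a flapping network doesn't hammer the server. A deterministic sketch (production code should also add random jitter so that many clients don't all reconnect at the same instant):

```typescript
// Delay before reconnect attempt N: doubles each time, capped at 30s.
// Reset the attempt counter to 0 once a connection succeeds.
function backoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Typical usage (sketch):
//   socket.onclose = () => setTimeout(connect, backoffMs(attempt++));
//   socket.onopen  = () => { attempt = 0; };
```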
Architecture That Worked
Here's the stack that handled our million-event-per-second requirement:
Data Layer
- Apache Kafka for event ingestion
- Apache Flink for real-time aggregation
- Redis for latest-state caching
- TimescaleDB for historical queries
API Layer
- Go for WebSocket server (handles concurrent connections efficiently)
- gRPC for internal service communication
- Message batching (send updates every 100ms, not every event)
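The batching logic is small enough to sketch. Updates accumulate in a buffer, and a timer flushes them as a single message every interval, so each client sees at most ~10 messages per second regardless of event volume. (Illustrative TypeScript; our actual server was Go, but the structure is the same.)

```typescript
type Send = (batch: unknown[]) => void;

class Batcher {
  private pending: unknown[] = [];

  constructor(private send: Send) {}

  // Called once per incoming event: cheap, just buffers.
  add(update: unknown): void {
    this.pending.push(update);
  }

  // Called by a 100ms interval timer in production. Sends nothing
  // if there were no updates, so idle streams stay silent.
  flush(): void {
    if (this.pending.length === 0) return;
    this.send(this.pending);
    this.pending = [];
  }
}
```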
Frontend
- React for UI structure
- WebGL (via regl) for high-frequency charts
- Lightweight canvas for medium-frequency charts
- SVG (via D3) only for low-frequency, interaction-heavy charts
Key Decisions
- Aggregate as early as possible: The frontend should receive ready-to-display data, not raw events.
- Separate update frequencies: Not every element needs 10fps. Static context can update every 30 seconds.
- User-controlled detail level: Let users choose between "overview" (slower updates, more data) and "detailed" (faster updates, focused view).
Performance Optimizations
On the Server
- Pre-compute time buckets: Don't make the client calculate "last 5 minutes"
- Delta encoding: Send only what changed, not the full state
- Compression: per-message deflate on WebSocket frames (surprisingly effective)
- Connection pooling: Reuse connections across subscriptions
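Delta encoding deserves a concrete sketch, since it's the least obvious item in the list: instead of resending the full state, the server sends only the keys whose values changed, and the client merges them into its local copy. (Shallow comparison on a flat record, for illustration only.)

```typescript
type State = Record<string, number>;

// Server side: compute only the changed keys.
function delta(prev: State, next: State): State {
  const out: State = {};
  for (const k of Object.keys(next)) {
    if (prev[k] !== next[k]) out[k] = next[k];
  }
  return out;
}

// Client side: merge the delta into the last known state.
function applyDelta(prev: State, d: State): State {
  return { ...prev, ...d };
}
```

For a dashboard where most metrics hold steady between updates, this routinely shrinks messages by an order of magnitude before compression even runs.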
On the Client
- Object pooling: Reuse chart elements instead of garbage collecting
- RequestAnimationFrame batching: Sync updates with browser render cycle
- Canvas layering: Static elements on one canvas, dynamic on another
- Web Workers: Parse incoming data off the main thread
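Object pooling, the first item above, looks like this in miniature: instead of allocating a fresh object per point and letting the garbage collector clean up mid-frame, released objects go back into a free list and get reused. (Generic sketch; the `make`/`reset` callbacks are illustrative names.)

```typescript
// Generic object pool: acquire() reuses a released object if one is
// available, otherwise allocates. release() resets and stores it.
class Pool<T> {
  private free: T[] = [];

  constructor(private make: () => T, private reset: (o: T) => void) {}

  acquire(): T {
    return this.free.pop() ?? this.make();
  }

  release(o: T): void {
    this.reset(o);
    this.free.push(o);
  }
}
```

The payoff isn't raw speed on any one allocation; it's that the render loop stops triggering GC pauses, which show up as dropped frames at 60fps.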
What Didn't Work
- SVG for high-frequency updates (DOM manipulation too slow)
- Redux for real-time state (too much overhead for frequent updates)
- Off-the-shelf charting libraries for >5fps (not optimized for this use case)
UX Lessons from Real-Time
Lesson 1: Give Users Control
Not everyone wants live updates. Some users find them distracting. We added:
- Pause button: "Freeze" the current view
- Update frequency selector: 1 second, 5 seconds, 30 seconds
- Historical mode: "Show me what happened 5 minutes ago"
Lesson 2: Make State Obvious
Users need to know:
- Is this live or historical?
- When was the last update?
- Is the connection healthy?
We added a persistent "heartbeat" indicator that pulsed with each update. Surprisingly reassuring.
Lesson 3: Handle the Boring States
Most of the time, nothing interesting happens. The chart just... updates with similar values.
This is where annotations help: "Spike detected at 14:32" draws attention to meaningful changes. Without it, users stare at noise.
Lesson 4: Mobile is Different
Smaller screens, worse connections, battery constraints. For mobile:
- Reduce update frequency automatically
- Simplify visualizations
- Add aggressive reconnection logic
When Real-Time Isn't Worth It
After building this system, I'm more skeptical of real-time requirements. Ask:
- Will faster updates change user behavior?
- Can users actually act on information that fast?
- Is the engineering cost justified?
For most dashboards, the answer is no. A 15-second refresh is fine. Save real-time for cases where seconds actually matter.
Tools and Resources
For Tier 1-2 (near-real-time and real-time):
Modern tools like ChartGen.ai can generate charts that poll APIs effectively. Combined with WebSocket endpoints, you can build solid real-time dashboards without custom infrastructure.
For Tier 3 (sub-second):
You'll need specialized tools: D3 with canvas, custom WebGL rendering, or purpose-built libraries like uPlot or Apache ECharts with incremental update mode.
For streaming infrastructure:
- Apache Kafka + Flink (complex but powerful)
- AWS Kinesis + Lambda (managed but limited)
- Redis Streams + custom aggregation (simpler but less scalable)
Monitoring Your Real-Time System
What we measured:
- End-to-end latency (event timestamp to pixel on screen)
- Connection health (reconnections per hour)
- Render performance (frames per second)
- User engagement (do people actually watch the real-time view?)
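Measuring end-to-end latency only requires that each message carry the original event timestamp: on render, record `now - ts` and summarize the samples as percentiles, since averages hide the tail. A small sketch of the percentile side (nearest-rank style, illustrative):

```typescript
// Nearest-rank percentile over a sample window, e.g. the last minute
// of observed (render time - event timestamp) latencies.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}
```

We alerted on p95 rather than the mean: a healthy average with a blown-out p95 usually meant one slow client path, not a healthy system.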
Surprising finding: Many users opened the dashboard, watched for 2 minutes, then left it in a background tab. The "live" data was mostly unseen.
Final Thought
Real-time visualization is an engineering challenge, but it's also a UX challenge. The hardest part isn't getting data to the screen fast—it's presenting it in a way that humans can actually understand and act on.
Before building real-time, ask: what will users actually do differently with faster data?
If the answer isn't clear, you might not need real-time at all.

