What Is Synthetic Monitoring, and Why Every Infrastructure Team Needs It

Synthetic Monitoring Series — Part 1 of 8

Before a customer opens a support ticket. Before your on-call phone rings at 3 AM. Before the social posts start — something went wrong. And it probably started long before anyone noticed.

That gap between “problem started” and “problem detected” is where synthetic monitoring lives. This post introduces the concept, traces its history, and explains why it’s become foundational to modern infrastructure observability.

Start Here: What the Videos Cover

The two videos below — each under three minutes — walk through what synthetic monitoring is and how we got here.

▶  Video 1 — “What Is Synthetic Monitoring?”
▶  Video 2 — “A Brief History of Network Monitoring”

The Core Idea: Stop Waiting for Someone to Tell You

Most monitoring is reactive. Real User Monitoring — RUM — watches actual people browse your application and records what they experience. It’s valuable, but it has a fundamental limitation: it requires users to be present. No traffic, no data. No complaints, no visibility.

Synthetic monitoring inverts that model. Instead of waiting for real users, it sends automated probes on a schedule — every 30 seconds, every minute — from controlled locations around the world. The probes simulate a user interaction and record the result: latency, packet loss, response codes, TLS handshake duration, DNS resolution time. If something breaks, the probe catches it — whether or not anyone is online.


The only thing “synthetic” about it is the user. The measurements are completely real. Round-trip time is round-trip time. A failed TLS handshake is a failed TLS handshake. Synthetic monitoring doesn’t simulate problems — it simulates the traffic and measures actual infrastructure behavior.
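To make that concrete, here is a minimal sketch of a real measurement taken by a synthetic probe, using only the Python standard library. The hostname in the usage comment is illustrative; a production probe would add timeout handling, retries, and result storage.

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Measure one real TCP three-way handshake, in milliseconds."""
    start = time.perf_counter()
    # create_connection performs DNS resolution plus the TCP handshake
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close it immediately
    return (time.perf_counter() - start) * 1000

# Usage (hostname is illustrative):
#   rtt_ms = tcp_connect_rtt("example.com")
```

The user here is synthetic; the round-trip time is not.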

The Core Loop

Every synthetic check follows the same simple sequence:

1. Send: a probe sends a request from a known, controlled location.
2. Measure: it records the response (latency, status, timing breakdown).
3. Record: results are stored and compared against baselines.
4. Repeat: the cycle runs on a fixed schedule, 24 hours a day.

If latency crosses a threshold, a connection fails, or a certificate is about to expire, an alert fires immediately — not after a user calls in.
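That loop is simple enough to sketch in a few lines of Python. The threshold, interval, and print-based "storage" below are stand-ins; a real probe would persist results and route alerts through a paging system.

```python
import time
import urllib.request
import urllib.error

BASELINE_MS = 500   # illustrative latency threshold
INTERVAL_S = 30     # illustrative probe schedule

def run_check(url: str) -> tuple[int, float]:
    """Send one probe: fetch the URL, return (status code, latency in ms)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # a 4xx/5xx response is still a measurement
    latency_ms = (time.perf_counter() - start) * 1000
    return status, latency_ms

def monitor(url: str) -> None:
    """The core loop: send, measure, record, repeat."""
    while True:
        status, latency_ms = run_check(url)                # 1. send + 2. measure
        record = {"ts": time.time(), "status": status, "ms": latency_ms}
        print(record)                                      # 3. record (stand-in for real storage)
        if status >= 500 or latency_ms > BASELINE_MS:
            print(f"ALERT: {url} status={status} latency={latency_ms:.0f}ms")
        time.sleep(INTERVAL_S)                             # 4. repeat on schedule
```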

This is what makes synthetic monitoring so powerful for SLA compliance. If your SLA promises 99.99% uptime, you have about 52 minutes of allowed downtime per year, under nine seconds per day. You cannot afford to wait for a complaint.
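The downtime budget implied by an SLA is simple arithmetic:

```python
def downtime_budget_seconds(sla_percent: float, period_seconds: int) -> float:
    """Allowed downtime for a given SLA percentage over a given period."""
    return period_seconds * (1 - sla_percent / 100)

DAY = 24 * 60 * 60        # 86,400 seconds
YEAR = 365 * DAY

per_day = downtime_budget_seconds(99.99, DAY)          # ~8.64 seconds per day
per_year_min = downtime_budget_seconds(99.99, YEAR) / 60  # ~52.6 minutes per year
```

At 99.99%, a single missed incident that runs for ten minutes burns roughly a fifth of the entire year's budget.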

Where This All Came From: Four Decades in Three Tracks

Modern synthetic monitoring didn’t appear fully formed. It evolved over 40 years, across three separate disciplines that rarely talked to each other.

Track 1: SNMP (1988)

RFC 1067 gave engineers the first formal way to query network devices: is this router alive, how much traffic is it handling, is it overloaded? SNMP gave us device-level visibility. It told you what your equipment was doing — but nothing about what your users were experiencing.

Track 2: NetFlow (mid-1990s)

Cisco built NetFlow as a switching optimization, but network engineers quickly realized the flow records were valuable for traffic analysis. Suddenly you could answer: where is traffic going, who’s talking to whom, which applications are using the most bandwidth? NetFlow told you about traffic patterns — but still nothing about whether the service at the other end was actually working.

Track 3: Synthetic (1983–1995)

Engineers have run ping since 1983 — the oldest form of synthetic testing, written by Mike Muuss to debug a network problem. He named it after sonar. Commercial synthetic monitoring didn’t arrive until 1995 with Keynote Systems. These platforms asked a fundamentally different question: not “is the device healthy?” or “where is the traffic going?” — but “what is the user actually experiencing?”


Three tracks. Four decades. Each one essential, each one incomplete on its own. What the industry learned — and what modern observability platforms are now designed around — is that these tracks need to converge. Device health, traffic patterns, and user experience are not separate problems. They’re three angles on the same system.

What Synthetic Monitoring Actually Measures

A single synthetic check can decompose a web request into five distinct phases, each one pointing to a different part of your infrastructure:

Phase            | What It Measures                          | What a Problem Indicates
-----------------|-------------------------------------------|-------------------------------------------
DNS Resolution   | Hostname-to-IP translation time           | Resolver misconfiguration or failure
TCP Connection   | Three-way handshake latency               | Network distance or congestion
TLS Handshake    | Certificate exchange and encryption setup | Cert chain issues or missing TLS 1.3
TTFB             | Server processing time                    | Application bottleneck
Content Download | Response transfer speed                   | Bandwidth constraint or oversized payload
If TTFB is slow while DNS, TCP, and TLS are all fast, the problem is in your application — not your network. If DNS is disproportionately long, you have a resolver issue. One check. Five answers.
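A sketch of that decomposition, using only the Python standard library. The `use_tls` switch is an illustrative convenience for plain-HTTP targets; a production probe would also separate request-send time from server wait and validate the certificate chain explicitly.

```python
import socket
import ssl
import time

def timed_phases(host: str, path: str = "/", port: int = 443, use_tls: bool = True) -> dict:
    """Decompose one HTTP(S) request into per-phase timings, in milliseconds."""
    timings = {}
    t0 = time.perf_counter()

    # Phase 1: DNS resolution (hostname -> IP)
    addr = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t1 = time.perf_counter(); timings["dns_ms"] = (t1 - t0) * 1000

    # Phase 2: TCP three-way handshake
    conn = socket.create_connection((addr, port), timeout=10)
    t2 = time.perf_counter(); timings["tcp_ms"] = (t2 - t1) * 1000

    # Phase 3: TLS handshake (certificate exchange + encryption setup)
    if use_tls:
        conn = ssl.create_default_context().wrap_socket(conn, server_hostname=host)
    t3 = time.perf_counter(); timings["tls_ms"] = (t3 - t2) * 1000

    # Phase 4: TTFB -- send the request, wait for the first response byte
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode())
    conn.recv(1)
    t4 = time.perf_counter(); timings["ttfb_ms"] = (t4 - t3) * 1000

    # Phase 5: content download -- read until the server closes the connection
    while conn.recv(65536):
        pass
    timings["download_ms"] = (time.perf_counter() - t4) * 1000
    conn.close()
    return timings
```

Run against a healthy endpoint, this yields the per-phase baseline that later anomalies get compared against.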

The 3 AM Test

Here’s the simplest way to understand the value of synthetic monitoring:

It’s 3 AM on a Sunday. No users are online. Your passive monitoring sees nothing — because there’s nothing to see.

A synthetic probe sends its scheduled check. The API returns a 500 error. An alert fires. By 3:05 AM, your on-call engineer is investigating — hours before the first customer would have noticed. By the time business hours begin, the issue is resolved and the incident is documented.

That’s active measurement. You’re not waiting for the network to report a problem. You’re asking it — on a schedule, from controlled locations, continuously.

Why This Series Exists

Parlon is built around the philosophy that observability should be active and continuous — not passive and reactive. Synthetic monitoring is the technical foundation of that approach.

This series exists because we believe the infrastructure and operations community deserves a clear, technically rigorous explanation of how these systems work — from first principles through to production-grade deployment. Not a sales pitch. An education.

Over the coming posts, we’ll cover the protocols that make synthetic monitoring possible (ICMP, DNS, TCP, TLS, HTTP, WebSocket, and more), how probes are architected for resilience, what distributed testing actually reveals, and how machine learning is changing alert quality.


Next in the Series

Part 2 — The Foundation: ICMP, DNS, and TCP. The three protocols every synthetic monitoring strategy starts with.

Tags: Synthetic Monitoring, Network Observability, Infrastructure Monitoring, SRE, Network Operations, IT Operations, SNMP, NetFlow, DNS, HTTP Monitoring


About Parlon
Parlon is an infrastructure observability platform built for enterprise teams operating complex, hybrid environments. Parlon combines active synthetic validation, real-time telemetry normalization, and learning-based alerting into a single platform — shifting operations from firefighting to foresight. Learn more at parlon.io.