Network Protocol Stack

Internet speed testing relies on multiple layers of network protocols working together. Understanding this stack helps explain how measurements are taken and what they represent.

  • Application Layer (HTTP/2, WebSocket): High-level protocols for web communication and data transfer
  • Transport Layer (TCP/UDP): Reliable data delivery, flow control, and congestion control
  • Network Layer (IP): Routes packets across networks using IP addresses
  • Data Link Layer (Ethernet, Wi-Fi): Framing, media access, and error detection on local networks
  • Physical Layer (Cables, Radio Waves): Converts digital data into physical signals for transmission

Each layer adds overhead and capabilities that affect the final speed measurements. Speed tests primarily measure performance at the transport and network layers.
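
The per-layer overhead can be made concrete: each full-size packet carries TCP, IP, and Ethernet framing in addition to payload, so application-layer goodput sits a few percent below the line rate. A rough sketch, assuming IPv4 and TCP without options on untagged Ethernet (header sizes are typical values, not universal):

```javascript
// Rough goodput estimate: the payload's share of every byte on the wire.
const MSS = 1460;             // TCP payload bytes in a full-size Ethernet frame
const headers = {
  tcp: 20,                    // TCP header, no options
  ip: 20,                     // IPv4 header
  ethernet: 14 + 4 + 8 + 12   // header + FCS + preamble + inter-frame gap
};
const wireBytes = MSS + headers.tcp + headers.ip + headers.ethernet; // 1538
const efficiency = MSS / wireBytes;                                  // ~0.949

// On a 1000 Mbps link, application-layer goodput tops out near:
console.log((1000 * efficiency).toFixed(1)); // "949.3"
```

This is why even a perfect speed test over a gigabit link reports somewhat less than 1000 Mbps.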

TCP Optimization

TCP (Transmission Control Protocol) is the backbone of internet data transfer. Understanding TCP mechanics is crucial for interpreting speed test results.

TCP Connection Establishment

Every TCP connection begins with a three-way handshake:

  1. SYN: Client sends synchronization request
  2. SYN-ACK: Server acknowledges and sends its own synchronization
  3. ACK: Client acknowledges server's synchronization
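
The handshake's cost shows up directly in time-to-first-byte. A toy model (assuming the HTTP request rides on the final ACK, and illustrative TLS round-trip counts):

```javascript
// Toy model: time until the first response byte over a fresh connection.
function firstByteTime(rttMs, tlsRounds = 0) {
  const handshake = rttMs;        // SYN -> SYN-ACK -> ACK (request can ride the ACK)
  const tls = tlsRounds * rttMs;  // e.g. 1 for TLS 1.3, 2 for TLS 1.2
  const request = rttMs;          // request out -> first response byte back
  return handshake + tls + request;
}

console.log(firstByteTime(50));    // plain TCP on a 50 ms path: 100
console.log(firstByteTime(50, 1)); // with TLS 1.3: 150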

Congestion Window

TCP uses a congestion window to control how much data can be sent before receiving acknowledgments. This window size directly affects achievable speeds.

// Simplified TCP congestion window growth (window in MSS units)
initial_window = 1 MSS (Maximum Segment Size)
for each ACK received:
    if (window < ssthresh):              // slow start phase
        window = window + 1              // doubles roughly once per RTT (exponential)
    else:                                // congestion avoidance
        window = window + 1 / window     // grows ~1 MSS per RTT (linear)
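
The growth pattern is easier to see tracked once per round trip. A runnable sketch (window in MSS units; the ssthresh default is illustrative):

```javascript
// Congestion window growth sampled once per RTT.
function growWindow(rtts, ssthresh = 16) {
  let cwnd = 1;
  const history = [cwnd];
  for (let i = 0; i < rtts; i++) {
    cwnd = cwnd < ssthresh
      ? cwnd * 2    // slow start: doubles each RTT
      : cwnd + 1;   // congestion avoidance: +1 MSS each RTT
    history.push(cwnd);
  }
  return history;
}

console.log(growWindow(7)); // [1, 2, 4, 8, 16, 17, 18, 19]
```

The exponential ramp is why a speed test must run for several seconds: early samples taken during slow start understate the link's real capacity.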

TCP Fast Open

Modern TCP implementations support TCP Fast Open (TFO), which lets a client send data in the SYN packet of repeat connections to the same server, saving one round trip compared with waiting for the full handshake to complete.

Performance Metrics

Beyond basic speed measurements, several key metrics provide a complete picture of network performance.

  • Bandwidth (maximum capacity, bps): The theoretical maximum data transfer rate of a connection.
  • Throughput (actual performance, bps): The data transfer rate actually achieved under real conditions.
  • Latency (round-trip time, ms): Time for a packet to travel to the server and back.
  • Jitter (latency variation, ms): Variation in latency over time, affecting real-time applications.
  • Packet Loss (percentage, %): Share of packets that fail to reach their destination.
  • Bufferbloat (latency increase, ms): Added latency caused by excessive buffering in routers.
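
Jitter is derived from the latency samples a test already collects. One common simple definition is the mean absolute difference between consecutive samples (RFC 3550 specifies a smoothed variant; this sketch uses the plain average):

```javascript
// Jitter as the mean absolute difference between consecutive latency samples.
function jitter(latenciesMs) {
  let sum = 0;
  for (let i = 1; i < latenciesMs.length; i++) {
    sum += Math.abs(latenciesMs[i] - latenciesMs[i - 1]);
  }
  return sum / (latenciesMs.length - 1);
}

console.log(jitter([20, 24, 21, 30, 22])); // 6
```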

Congestion Control

Congestion control algorithms prevent network collapse by adapting to available bandwidth and managing data flow.

TCP Congestion Control Algorithms

  • Reno: Classic algorithm with slow start and congestion avoidance
  • Cubic: Default in Linux, optimized for high-speed networks
  • BBR: Google-developed algorithm focusing on bottleneck bandwidth and latency
  • BBRv2: Improved version with better fairness and convergence
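
Reno's behavior is often summarized as AIMD: additive increase, multiplicative decrease. A toy sketch of that rule (starting window and event stream are illustrative; real implementations work in bytes and track much more state):

```javascript
// Toy AIMD (Reno-style): +1 MSS per RTT of ACKs, halve on each loss event.
function aimd(events) {           // events: 'ack' (one RTT of ACKs) or 'loss'
  let cwnd = 10;                  // starting window in MSS, illustrative
  for (const e of events) {
    cwnd = e === 'loss' ? Math.max(1, Math.floor(cwnd / 2)) : cwnd + 1;
  }
  return cwnd;
}

console.log(aimd(['ack', 'ack', 'loss', 'ack'])); // 10 -> 11 -> 12 -> 6 -> 7
```

The resulting sawtooth is visible in many speed test graphs: throughput climbs steadily, dips sharply on loss, then climbs again.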

Active Queue Management

Modern routers use AQM (Active Queue Management) to prevent buffer bloat:

  • CoDel: Controlled Delay algorithm that manages queue length
  • FQ-CoDel: Flow Queue CoDel for fair bandwidth sharing
  • CAKE: Common Applications Kept Enhanced with advanced features
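
CoDel's core idea, dropping packets once queueing delay stays above a small target for a sustained period, can be caricatured in a few lines. Real CoDel tracks wall-clock time rather than sample counts and escalates its drop rate; this sketch only shows the trigger condition, with made-up thresholds:

```javascript
// CoDel-flavored trigger: drop once the last `intervalSamples` measured
// queue delays have all exceeded `target` ms.
function shouldDrop(delaysMs, target = 5, intervalSamples = 3) {
  return delaysMs.length >= intervalSamples &&
         delaysMs.slice(-intervalSamples).every(d => d > target);
}

console.log(shouldDrop([2, 6, 7, 8])); // true  (delay persistently high)
console.log(shouldDrop([6, 7, 2, 8])); // false (a recent sample recovered)
```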

Why Congestion Control Matters

Without proper congestion control, networks would collapse under high load. Speed tests must work within these constraints to provide realistic measurements.

Web APIs & Web Workers

Modern speed tests leverage browser APIs and Web Workers for accurate, non-blocking measurements.

Web Workers

Web Workers run JavaScript in background threads, preventing UI blocking during intensive speed tests.

// Creating a Web Worker for speed testing
const worker = new Worker('download-worker.js', { type: 'module' });

// Send test parameters
worker.postMessage({
    url: 'https://speed-test-backend.up.railway.app/api/download',
    duration: 10000,
    threads: 4
});

// Receive progress updates
worker.onmessage = (event) => {
    const { type, data } = event.data;
    if (type === 'progress') {
        updateProgress(data.speed);
    }
};
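
The main thread above posts its parameters to download-worker.js. A hypothetical worker-side counterpart might look like the following; the endpoint and message shape follow the example above, but the rest is an assumption (it also handles a single stream, ignoring the threads parameter, for brevity):

```javascript
// Hypothetical download-worker.js: stream the response body and report
// instantaneous speed back to the main thread.
function mbps(bytes, elapsedMs) {
  return (bytes * 8) / (elapsedMs / 1000) / 1e6; // megabits per second
}

if (typeof self !== 'undefined') { // only when running inside a worker
  self.onmessage = async (event) => {
    const { url, duration } = event.data;
    const start = performance.now();
    let bytes = 0;
    const response = await fetch(url, { cache: 'no-store' }); // bypass caches
    const reader = response.body.getReader();
    while (performance.now() - start < duration) {
      const { done, value } = await reader.read();
      if (done) break;
      bytes += value.byteLength;
      self.postMessage({
        type: 'progress',
        data: { speed: mbps(bytes, performance.now() - start) }
      });
    }
    await reader.cancel(); // stop the transfer once the test window closes
    self.postMessage({ type: 'done', data: { bytes } });
  };
}
```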

Fetch API & HTTP/2

The Fetch API provides modern HTTP capabilities with support for:

  • HTTP/2 multiplexing for parallel requests over a single connection
  • Server push (note: now deprecated and removed from most browsers)
  • Request/response streaming via ReadableStream
  • Advanced caching controls (e.g. the cache option to bypass caches)

Performance APIs

Browser Performance APIs provide detailed timing information:

  • performance.now(): High-resolution timestamps
  • Resource Timing API: Detailed request metrics
  • Navigation Timing API: Page load performance
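
A Resource Timing entry can be split into the phases a speed test cares about. The field names below come from the Resource Timing API; the sample entry is fabricated for illustration (in a browser you would read a real entry from performance.getEntriesByType('resource')):

```javascript
// Break a Resource Timing entry into request phases (all in ms).
function phases(entry) {
  return {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart,   // TCP (and TLS) setup
    ttfb: entry.responseStart - entry.requestStart,   // time to first byte
    download: entry.responseEnd - entry.responseStart
  };
}

// Fabricated sample entry, for illustration only:
const sample = { domainLookupStart: 0, domainLookupEnd: 12,
                 connectStart: 12, connectEnd: 40,
                 requestStart: 40, responseStart: 95, responseEnd: 180 };
console.log(phases(sample)); // { dns: 12, connect: 28, ttfb: 55, download: 85 }
```

Note that cross-origin entries report zeros for these fields unless the server sends a Timing-Allow-Origin header.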

Measurement Theory

Accurate speed measurement requires understanding statistical principles and potential sources of error.

Statistical Considerations

Speed measurements follow statistical distributions. Multiple samples provide more reliable results:

  • Central Tendency: Mean, median, and mode calculations
  • Variance Analysis: Understanding measurement spread
  • Confidence Intervals: Range of likely true values
  • Outlier Detection: Filtering anomalous measurements
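
These ideas combine naturally: filter outliers first, then report a robust central value. A sketch using the conventional 1.5×IQR rule (one common choice among several; quartile indexing here is simplified):

```javascript
// Robust speed estimate: median of samples after IQR outlier filtering.
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function trimmedMedian(samples) {
  const s = [...samples].sort((a, b) => a - b);
  const q1 = s[Math.floor(s.length * 0.25)];   // simplified quartiles
  const q3 = s[Math.floor(s.length * 0.75)];
  const iqr = q3 - q1;
  const kept = s.filter(x => x >= q1 - 1.5 * iqr && x <= q3 + 1.5 * iqr);
  return median(kept);
}

// The 12 Mbps sample (e.g. a momentary stall) is discarded as an outlier:
console.log(trimmedMedian([94, 96, 95, 97, 12, 95])); // 95
```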

Sources of Measurement Error

Common Error Sources

  • Server Limitations: Test server capacity constraints
  • Network Asymmetry: Different upload/download capacities
  • Cross-traffic: Other network activity affecting measurements
  • Protocol Overhead: TCP/IP headers and control packets
  • Measurement Timing: Clock synchronization issues
  • Compression Effects: Data compression altering transfer rates

Calibration & Validation

Professional speed test services regularly calibrate their systems against known standards and reference measurements to ensure accuracy.

Next Steps

Ready to put this knowledge into practice?