
M-Lab Testing Platform

M-Lab provides one of the largest collections of open Internet performance data. As a consortium of research, industry, and public-interest partners, M-Lab is dedicated to providing an ecosystem for the open, verifiable measurement of global network performance. All of the data collected by M-Lab's global measurement platform is made openly available, and all of the measurement tools hosted by M-Lab are open source.

The .CA Internet Performance Test (IPT) uses vendor-neutral test servers located at Internet Exchange Points (IXPs) throughout Canada. Server nodes at the Canadian IXPs in Toronto, Montreal, and Calgary run the M-Lab platform, which offers a number of tests measuring network speed and latency, blocking, and throttling. The IPT uses the Network Diagnostic Test (NDT) to provide speed measurements and diagnostic information about your configuration and network infrastructure.
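At its core, a speed test like NDT times a bulk transfer between a client and a server and divides bytes moved by elapsed time. The following is a minimal local sketch of that idea only; it is not the NDT protocol, and the payload size and addresses are illustrative.

```python
# Minimal sketch of how a throughput test works: send a known amount of
# data over a TCP connection and compute bits transferred per second.
# This is NOT the NDT protocol -- just the underlying idea.
import socket
import threading
import time

PAYLOAD = b"x" * (1 << 20)  # 1 MiB of test data (illustrative size)

def serve(listener: socket.socket) -> None:
    """Accept one connection, send the payload, then close."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(PAYLOAD)

def measure_download_mbps() -> tuple:
    """Time a local bulk transfer and return (bytes_received, Mbit/s)."""
    listener = socket.create_server(("127.0.0.1", 0))
    threading.Thread(target=serve, args=(listener,), daemon=True).start()
    with socket.create_connection(listener.getsockname()) as client:
        received = 0
        start = time.monotonic()
        while chunk := client.recv(65536):
            received += len(chunk)
        elapsed = time.monotonic() - start
    listener.close()
    # bits transferred / seconds, scaled to megabits per second
    return received, received * 8 / elapsed / 1e6

bytes_received, mbps = measure_download_mbps()
```

A real test such as NDT adds, on top of this timing loop, the server-side TCP instrumentation described later in this page.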

Using the Data

As each user performs a test, their data is anonymously collected and aggregated into a large dataset spanning Canada, which lets researchers understand the capabilities of Canada’s Internet infrastructure. As the reporting infrastructure grows, we will be able to overlay demographic and social data to help understand who benefits most from this technology. All tests performed on the M-Lab NDT platform are stored in the M-Lab database and made available through Google BigQuery.

Once a sufficient amount of Internet Performance Test data has been collected, CIRA will provide an easy-to-use way to access it. In the meantime, you can access all the data M-Lab collects (including IPT data) directly via:
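For example, NDT results can already be queried from M-Lab's public BigQuery dataset. The sketch below builds a Standard SQL query against the `measurement-lab.ndt.unified_downloads` view; the project, view, and column names follow M-Lab's published schema but may change, so verify them against current M-Lab documentation before use.

```python
# Sketch: querying M-Lab NDT download results for Canada from BigQuery.
# Assumes the public "measurement-lab" project and its "ndt.unified_downloads"
# view; check M-Lab's documentation for the current dataset and column names.

def build_canada_download_query(start_date: str, end_date: str) -> str:
    """Build a BigQuery Standard SQL query for Canadian NDT download tests."""
    return f"""
    SELECT
      date,
      client.Geo.Region AS region,
      COUNT(*) AS tests,
      APPROX_QUANTILES(a.MeanThroughputMbps, 100)[OFFSET(50)] AS median_mbps
    FROM `measurement-lab.ndt.unified_downloads`
    WHERE client.Geo.CountryCode = 'CA'
      AND date BETWEEN '{start_date}' AND '{end_date}'
    GROUP BY date, region
    ORDER BY date
    """

# To execute (requires the google-cloud-bigquery package and credentials):
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(
#       build_canada_download_query("2024-01-01", "2024-01-31")).result()
```

Using a median (here, the 50th percentile via APPROX_QUANTILES) rather than a mean is common for speed-test data, which tends to be heavily skewed.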

The Network Diagnostic Test (NDT) Results

M-Lab’s Network Diagnostic Test (NDT) connects your computer to one of our servers within Canadian IXPs to provide network configuration and performance testing. It communicates with the server to perform diagnostic functions and then displays the results to the user. For additional details on the NDT test itself, refer to the links below:

Detailed Results

The individual values are separated by commas, with no spaces. The following results are stored:

Variable Description
c2sRate Measured throughput speed from client to server (value in kb/s).
ClientIP The IP address assigned to the client that conducted the measurement.
ClientPort The port used by the client to conduct the measurement.
ClientReportedMbps The download rate as calculated by the client, in megabits per second, or Mbit/s. Not all clients report this value.
ClientToServerSpeed Measured throughput speed from client to server (value in Mbit/s).
CountRTT The number of round trip time samples included in S2C.SumRTT.
CurMSS The current maximum segment size (MSS), in octets.
EndTime The date and time when the measurement ended in UTC.
Error Any error message(s) recorded during a measurement.
MaxRTT The maximum sampled round trip time, recorded in milliseconds.
MeanThroughputMbps The measured rate as calculated by the server, in megabits per second (Mbit/s). This value is the average of tcp-info snapshots taken at the beginning and end of an ndt5 measurement, hence the name “MeanThroughputMbps”.
MinRTT The minimum RTT observed during the download measurement, recorded in milliseconds.
PacketLoss Percentage of packets that had to be resent due to transmission errors.
PktsOut The total number of segments sent.
s2cRate Measured throughput speed from server to client (value in kb/s).
ServerIP The IP address assigned to the M-Lab server that conducted the measurement.
ServerPort The port used by the server to conduct the measurement.
ServerToClientSpeed Measured throughput speed from server to client (value in Mbit/s).
StartTime The date and time when the measurement began in UTC.
SumRTT The sum of all sampled round trip times, recorded in milliseconds.
TCPInfo The TCPInfo record provides results from the TCP_INFO netlink socket. These are the same values returned to clients at the end of the download (S2C) measurement.
TCPInfo.AdvMSS Advertised MSS.
TCPInfo.AppLimited Flag indicating that rate measurements reflect non-network bottlenecks. Note that even very short application stalls invalidate max_BW measurements.
TCPInfo.ATO Delayed ACK Timeout. Quantized to system jiffies.
TCPInfo.Backoff Exponential timeout backoff counter. Incremented on RTO; reset on successful RTT measurements.
TCPInfo.BusyTime Time with outstanding (unacknowledged) data; that is, time when snd.una is not equal to snd.nxt.
TCPInfo.BytesAcked The number of data bytes for which cumulative acknowledgments have been received.
TCPInfo.BytesReceived The number of data bytes that have been received.
TCPInfo.BytesRetrans Bytes retransmitted. May include headers and new data carried with a retransmission (for thin flows).
TCPInfo.BytesSent Payload bytes sent (excludes headers, includes retransmissions).
TCPInfo.CAState Loss recovery state machine. For traditional loss based congestion control algorithms, CAState is also used to control window adjustments.
TCPInfo.DataSegsIn Input segments carrying data (len>0).
TCPInfo.DataSegsOut Transmitted segments carrying data (len>0).
TCPInfo.Delivered Data segments delivered to the receiver including retransmits. As reported by returning ACKs, used by ECN.
TCPInfo.DeliveredCE ECE marked data segments delivered to the receiver including retransmits. As reported by returning ACKs, used by ECN.
TCPInfo.DeliveryRate Observed Maximum Delivery Rate.
TCPInfo.DSackDups Duplicate segments reported by DSACK. Not reported by some Operating Systems.
TCPInfo.LastAckSent Time since last ACK was sent (not implemented). Present in TCP_INFO but not elsewhere in the kernel.
TCPInfo.LastDataRecv Time since last data segment was received. Quantized to jiffies.
TCPInfo.LastDataSent Time since last data segment was sent. Quantized to jiffies.
TCPInfo.Lost Scoreboard segments marked lost by loss detection heuristics. Accounting for the Pipe algorithm.
TCPInfo.MaxPacingRate Settable pacing rate clamp. Set with setsockopt(SO_MAX_PACING_RATE).
TCPInfo.MinRTT Minimum Round Trip Time. From an older, pre-BBR algorithm.
TCPInfo.NotsentBytes Number of bytes queued in the send buffer that have not been sent.
TCPInfo.Options Bit encoded SYN options and other negotiations TIMESTAMPS 0x1; SACK 0x2; WSCALE 0x4; ECN 0x8 - Was negotiated; ECN_SEEN - At least one ECT seen; SYN_DATA - SYN-ACK acknowledged data in SYN sent or rcvd.
TCPInfo.PacingRate Current Pacing Rate, nominally updated by congestion control.
TCPInfo.PMTU Maximum IP Transmission Unit for this path.
TCPInfo.Probes Consecutive zero window probes that have gone unanswered.
TCPInfo.RcvMSS Maximum observed segment size from the remote host. Used to trigger delayed ACKs.
TCPInfo.RcvRTT Receiver Side RTT estimate.
TCPInfo.RcvSpace Space reserved for the receive queue. Typically updated by receiver side auto-tuning.
TCPInfo.RcvSsThresh Current Window Clamp. Receiver algorithm to avoid allocating excessive receive buffers.
TCPInfo.Reordering Maximum observed reordering distance.
TCPInfo.ReordSeen Received ACKs that were out of order. Estimates reordering on the return path.
TCPInfo.Retrans Scoreboard segments marked retransmitted. Accounting for the Pipe algorithm.
TCPInfo.Retransmits Number of timeouts (RTO based retransmissions) at this sequence. Reset to zero on forward progress.
TCPInfo.RTO Retransmission Timeout. Quantized to system jiffies.
TCPInfo.RTT Smoothed Round Trip Time (RTT). The Linux implementation differs from the standard.
TCPInfo.RTTVar RTT variance. The Linux implementation differs from the standard.
TCPInfo.RWndLimited Time spent waiting for the receiver window.
TCPInfo.Sacked Scoreboard segments marked SACKED by SACK blocks. Accounting for the Pipe algorithm.
TCPInfo.SegsIn The number of segments received. Includes data and pure ACKs.
TCPInfo.SegsOut The number of segments transmitted. Includes data and pure ACKs.
TCPInfo.SndBufLimited Time spent waiting for sender buffer space. This only includes time when TCP transmissions are starved for data while the application is blocked because the buffer is full and cannot be grown for some reason.
TCPInfo.SndCwnd Congestion Window. Value controlled by the selected congestion control algorithm.
TCPInfo.SndMSS Current Maximum Segment Size. Note that this can be smaller than the negotiated MSS for various reasons.
TCPInfo.SndSsThresh Slow Start Threshold. Value controlled by the selected congestion control algorithm.
TCPInfo.State TCP state is nominally 1 (Established). Other values reflect transient states having incomplete rows.
TCPInfo.TotalRetrans Total number of segments containing retransmitted data.
TCPInfo.Unacked Number of segments between snd.nxt and snd.una. Accounting for the Pipe algorithm.
TCPInfo.WScale BUG: conflation of SndWScale and RcvWScale.
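Given the comma-separated storage format described above, a result row can be parsed by pairing its values with the field names. The sketch below uses an illustrative column order for a few of the summary fields (the real export order may differ, so match it to your data's header row), and derives a mean RTT from SumRTT and CountRTT as defined in the table.

```python
# Sketch: parsing one comma-separated result row into a dict and deriving
# mean RTT. The column order below is illustrative, NOT the actual export
# order -- align it with the real header row of your data.
FIELDS = ["ClientIP", "ServerIP", "StartTime", "EndTime",
          "MeanThroughputMbps", "MinRTT", "MaxRTT", "SumRTT", "CountRTT"]

def parse_row(line: str) -> dict:
    values = line.strip().split(",")  # values are comma-separated, no spaces
    row = dict(zip(FIELDS, values))
    # Convert the fields we compute with to numbers
    for key in ("MeanThroughputMbps", "MinRTT", "MaxRTT", "SumRTT", "CountRTT"):
        row[key] = float(row[key])
    # Mean RTT in ms: sum of sampled RTTs divided by the number of samples
    row["MeanRTT"] = row["SumRTT"] / row["CountRTT"] if row["CountRTT"] else None
    return row

# Example row with documentation-range IPs and made-up values
sample = ("203.0.113.5,198.51.100.7,2024-01-01T00:00:00Z,"
          "2024-01-01T00:00:10Z,87.4,12.0,48.0,3150.0,210")
row = parse_row(sample)
```

Mean RTT here is simply SumRTT divided by CountRTT, consistent with the definitions of those two fields above.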