Performance Benchmarks

How to interpret Proxus single-gateway pipeline benchmark results generated with simulated field input.

This page explains what the Proxus performance benchmark measures, what it does not cover, and how to use the results in sizing discussions.

Overview

The results on this page describe single-gateway pipeline throughput under simulated field input.

They are intended to show how much traffic the Proxus collection-to-storage pipeline can process under controlled benchmark conditions. They do not represent a benchmark of real PLCs, CNCs, sensors, field networks, or protocol round-trip timing.

Across the verified profiles in this document, observed throughput ranged from roughly 0.55 billion to 29.8 billion processed operations per day.

The arithmetic mean across the verified profiles was roughly 8.9 billion processed operations per day.

The highest observed profile used:

  • 16 devices
  • 1,000 tags per device
  • 25ms source polling
  • shared storage sink
  • a 20s measurement window after warmup

Under that profile, the observed result was roughly 29.78 billion processed operations per day. The lowest verified profile in the same set was roughly 0.55 billion operations per day.
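The daily figures quoted on this page are a direct extrapolation of the sustained processed-tag rate over the 86,400 seconds in a day. A minimal sketch of that conversion, using rates from the reference table further down:

```python
# Convert a sustained processed-tag rate (tags/s) into a daily operation count.
SECONDS_PER_DAY = 86_400

def daily_operations(tags_per_second: int) -> int:
    """Extrapolate a steady-state pipeline rate to operations per day."""
    return tags_per_second * SECONDS_PER_DAY

# Highest verified profile: 16 devices x 1,000 tags at 25ms polling.
print(daily_operations(344_625))  # 29,775,600,000 (~29.8 billion/day)

# Lowest verified profile: 64 devices x 100 tags at 1000ms polling.
print(daily_operations(6_395))    # 552,528,000 (~0.55 billion/day)
```

Note that this assumes the measured 20-second window is representative of steady-state behavior for a full day; the extrapolation does not account for retention jobs, compaction, or other periodic background work.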

warning
Benchmark Scope

This benchmark measures Proxus pipeline throughput under simulated input. It is not a hardware benchmark, and it should not be interpreted as an unconditional SLA.


Benchmark Snapshots

Benchmark snapshot summary

Benchmark terminal snapshot


What This Benchmark Measures

The benchmark focused on the end-to-end path from simulated field input through the Proxus gateway pipeline:

  • simulated data collection at the gateway
  • processing and routing inside the platform
  • delivery into storage

This makes the result more useful than an isolated database insert test, because it includes ingestion, scheduling, processing, routing, and storage delivery overhead inside the product.


What This Benchmark Does Not Measure

This benchmark does not measure:

  • real device read latency
  • PLC, CNC, RTU, or sensor response times
  • protocol stack behavior on physical networks
  • wiring, fieldbus congestion, or industrial switch latency
  • vendor-specific device performance limits

If you need to understand hardware behavior, network latency, or protocol round-trip timing, run a workload-specific validation against the actual devices on the actual network.


Simulation Method

For these runs, field input was generated by the platform simulator path rather than by physical devices.

As a result, the benchmark is useful for answering:

  • how much simulated device/tag traffic one gateway can process
  • how the platform behaves as polling cadence and workload shape change
  • what throughput the internal collection-to-storage pipeline can sustain

It does not answer how quickly a specific PLC family, CNC controller, or sensor fleet can be read in production.


Test Environment

The reported runs shared the following conditions:

  • single gateway
  • simulated field input
  • shared storage sink
  • 20s measurement window after warmup
  • Apple M4 Max test machine
  • 14 CPU cores
  • 36 GB RAM
  • macOS 26.2

Two benchmark families were used:

  • Baseline profile: production-like 1000ms polling
  • High-frequency profile: more aggressive polling such as 100ms, 50ms, or 25ms

How To Interpret The Result

Use the result as a pipeline sizing reference, not as a final deployment commitment.

It is most useful for answering:

  • what order of magnitude a single gateway can handle under simulated field input
  • whether one gateway is likely to be sufficient for an initial rollout
  • when a deployment should move from single-node sizing to scale-out planning

It should not be used to claim that every workload will reach the same daily volume, and it should not be presented as proof of real-device read performance.

For public messaging, the arithmetic mean is the most balanced summary. For engineering decisions, use the full profile table rather than a single headline number.
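For a first-pass engineering check, the comparison is simple: the planned workload's required tag rate versus the observed rate of the nearest verified profile. A rough sizing sketch, assuming uniform polling across all devices and that the nearest profile is a fair proxy for the planned workload shape (the 200-device rollout below is hypothetical):

```python
# Rough single-gateway sizing check: compare a planned workload's required
# tag rate against an observed benchmark rate for a similar profile.
def required_tags_per_second(devices: int, tags_per_device: int, polling_ms: int) -> float:
    """Tag reads per second the workload demands at steady state."""
    return devices * tags_per_device * (1000 / polling_ms)

# Hypothetical planned rollout: 200 devices x 150 tags at 500ms polling.
required = required_tags_per_second(200, 150, 500)  # 60,000 tags/s

# Nearest verified profile: 16 devices x 1,000 tags at 250ms -> 63,904 tags/s observed.
observed = 63_904
headroom = observed / required
print(f"required={required:,.0f} tags/s, headroom={headroom:.2f}x")
```

A headroom ratio this close to 1.0x is exactly the case where the page recommends moving from single-node sizing to a workload-specific validation rather than committing to one gateway.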


Reference Measurements

The following measurements were collected from isolated smoke benchmark runs with simulated input. Daily volume is derived from the observed processed tag rate for each run.

Profile                  | Polling | Tags/s  | Operations/day | Peak CPU | Peak RSS
64 devices x 100 tags    | 1000ms  | 6,395   | 552,528,000    | 4.7%     | 140.0 MB
64 devices x 100 tags    | 500ms   | 13,269  | 1,146,441,600  | 5.7%     | 139.9 MB
64 devices x 100 tags    | 250ms   | 25,554  | 2,207,865,600  | 20.5%    | 147.4 MB
64 devices x 100 tags    | 100ms   | 63,505  | 5,486,832,000  | 20.0%    | 148.5 MB
1000 devices x 100 tags  | 1000ms  | 99,846  | 8,626,694,400  | 14.7%    | 148.8 MB
16 devices x 1,000 tags  | 1000ms  | 14,795  | 1,278,288,000  | 18.0%    | 148.8 MB
16 devices x 1,000 tags  | 500ms   | 31,687  | 2,737,756,800  | 26.0%    | 148.9 MB
16 devices x 1,000 tags  | 250ms   | 63,904  | 5,521,305,600  | 21.7%    | 147.9 MB
16 devices x 1,000 tags  | 100ms   | 158,343 | 13,680,835,200 | 8.3%     | 147.4 MB
16 devices x 1,000 tags  | 50ms    | 314,655 | 27,186,192,000 | 20.1%    | 148.0 MB
16 devices x 1,000 tags  | 25ms    | 344,625 | 29,775,600,000 | 25.9%    | 147.8 MB
lightbulb
CPU and Memory Notes

CPU and RSS values shown here are the observed peak process values of the benchmark run itself. They are useful for comparing profiles, but they do not represent a full host-level capacity profile.
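The headline mean quoted earlier is the plain arithmetic mean of the per-run daily volumes in the table above. Recomputing it:

```python
# Arithmetic mean of the per-run daily operation volumes from the table above.
daily_volumes = [
    552_528_000, 1_146_441_600, 2_207_865_600, 5_486_832_000,
    8_626_694_400, 1_278_288_000, 2_737_756_800, 5_521_305_600,
    13_680_835_200, 27_186_192_000, 29_775_600_000,
]
mean = sum(daily_volumes) / len(daily_volumes)
print(f"{mean / 1e9:.1f} billion operations/day")  # 8.9 billion operations/day
```

Because the high-frequency runs dominate the upper end of the range, the mean sits well above the median; this is another reason to use the full table, not the headline number, for engineering decisions.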


What Changes Throughput Between Deployments

Observed throughput changes significantly with workload shape:

  • Device count: more devices increase coordination and scheduling overhead
  • Tag density: more tags per device increase payload size and storage pressure
  • Polling profile: faster polling produces more write pressure
  • Retention and storage policy: write-heavy and long-retention setups need different sizing
  • Concurrent analytics: dashboards, queries, and exports share system resources with ingestion

For this reason, two deployments with the same daily operation volume can behave very differently, especially once real devices and real networks are introduced.
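The first three factors combine into a simple theoretical ceiling: devices × tags per device ÷ polling interval. Comparing that ceiling against the observed rates in the reference table shows where scheduling overhead starts to bite. A sketch, assuming uniform polling across all devices:

```python
# Theoretical tag rate implied by workload shape, versus observed benchmark rate.
def theoretical_tags_per_second(devices: int, tags_per_device: int, polling_ms: int) -> float:
    """Upper bound on tag reads per second if every poll completes on schedule."""
    return devices * tags_per_device * 1000 / polling_ms

profiles = [
    # (devices, tags/device, polling_ms, observed tags/s from the table)
    (64, 100, 1000, 6_395),
    (16, 1_000, 50, 314_655),
    (16, 1_000, 25, 344_625),
]
for devices, tags, ms, observed in profiles:
    ceiling = theoretical_tags_per_second(devices, tags, ms)
    print(f"{devices}x{tags} @ {ms}ms: ceiling={ceiling:,.0f}, "
          f"observed={observed:,} ({observed / ceiling:.1%})")
```

At 1000ms and 50ms polling the observed rate tracks the theoretical ceiling closely, while at 25ms it falls well short, which is consistent with the pipeline approaching saturation at the most aggressive cadence.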


Benchmark Limits

This benchmark does not answer every performance question.

It does not by itself define:

  • query performance under mixed read/write load
  • retention cost over long historical windows
  • multi-gateway aggregation limits
  • the exact maximum of every protocol or device mix
  • real hardware read performance

If a deployment is expected to operate near the benchmark envelope, run a workload-specific validation with the expected device count, tag profile, polling cadence, network conditions, and real hardware.

For public messaging, keep the wording tied to simulated-input pipeline performance rather than real-device performance.