Prometheus vs VictoriaMetrics benchmark on node_exporter metrics

Benchmark setup

The benchmark was run in Google Compute Engine on four machines (instances):

  • An instance with node_exporter v1.0.1 as the scrape target. It was run on an e2-standard-4 machine with the following config: 4vCPU, 16GB RAM, 1TB HDD persistent disk. Initial tests revealed that node_exporter cannot process more than a few hundred requests per second, while Prometheus and VictoriaMetrics generated a much higher load on it during tests. So nginx with one-second response caching was put in front of node_exporter. This lowered the load on node_exporter to reasonable levels, so it could process all the incoming requests without scrape errors.
  • Two dedicated e2-highmem-4 instances for Prometheus v2.22.2 and VictoriaMetrics v1.47.0 with the following configs: 4vCPU, 32GB RAM, 1TB HDD persistent disk. Both VictoriaMetrics and Prometheus were run with default configs except for the path to the file with scrape configs (i.e. -promscrape.config=prometheus.yml for VictoriaMetrics and --config.file=prometheus.yml for Prometheus). The prometheus.yml file was generated from the following Jinja2 template:
global:
  scrape_interval: 10s
scrape_configs:
- job_name: node_exporter
  static_configs:
{% for n in range(3400) %}
  - targets: ['host-node-{{n}}:9100']
    labels:
      host_number: cfg_{{n}}
      role: node-exporter
      env: prod
{% endfor %}
  • An e2-standard-2 machine for monitoring VictoriaMetrics and Prometheus. A VictoriaMetrics instance on this machine was configured to scrape app-specific metrics and node_exporter metrics from the machines running VictoriaMetrics and Prometheus. The graphs below were built from these metrics.

node_exporter was chosen as the scrape target because:

  • node_exporter is the most widespread exporter, scraped by the majority of Prometheus installations.
  • node_exporter exports real-world metrics (CPU usage, RAM usage, disk IO usage, network usage, etc.) under load, so benchmark results can be extrapolated to production Prometheus setups.
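The one-second caching layer mentioned above can be sketched with a minimal nginx config. This is an illustration only: the ports, cache path, and zone name are assumptions, since the benchmark's actual nginx config is not published.

```nginx
# Illustrative nginx caching proxy in front of node_exporter.
# Ports and paths are assumed values, not the benchmark's actual config.
proxy_cache_path /var/cache/nginx keys_zone=metrics_cache:10m max_size=100m;

server {
    listen 9100;                          # port scraped by Prometheus/VictoriaMetrics
    location /metrics {
        proxy_pass http://127.0.0.1:9101; # node_exporter moved to an alternate port
        proxy_cache metrics_cache;
        proxy_cache_valid 200 1s;         # one-second response caching
        proxy_cache_lock on;              # only one request populates the cache at a time
    }
}
```

With this setup node_exporter sees at most one scrape per second regardless of how many scrapers hit the proxy.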

Storage stats

Let’s look at the storage stats, which are the same for both VictoriaMetrics and Prometheus:

  • Ingestion rate: 280K samples/sec
  • Active time series: 2.8 million
  • Samples scraped and stored: 24.5 billion
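These stats are mutually consistent: 2.8 million active series scraped every 10 seconds yields the stated ingestion rate, and the total sample count corresponds to roughly a day of scraping. A quick check (all numbers taken from the list above):

```python
# Cross-check the storage stats listed above.
active_series = 2_800_000        # active time series
scrape_interval_s = 10           # scrape_interval from prometheus.yml

ingestion_rate = active_series / scrape_interval_s
print(ingestion_rate)            # 280000.0 samples/sec

total_samples = 24.5e9           # samples scraped and stored
duration_hours = total_samples / ingestion_rate / 3600
print(round(duration_hours, 1))  # ~24.3 hours of scraping
```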

Benchmark results

Disk space usage:

Disk space usage: VictoriaMetrics vs Prometheus
  • VictoriaMetrics: 7.2GB. This translates to 0.3 bytes per sample (7.2GB / 24.5 billion samples).
  • Prometheus: 52.3GB (32.3GB data plus 18GB WAL). This translates to 2.1 bytes per sample (52.3GB / 24.5 billion samples). This means that Prometheus requires up to 7 times (2.1/0.3) more storage space than VictoriaMetrics for storing the same amount of data.
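The per-sample figures follow directly from the totals; keeping a bit more precision shows where the "up to 7 times" ratio comes from:

```python
# Bytes per sample and the resulting storage ratio (numbers from the article).
total_samples = 24.5e9

vm_per_sample   = 7.2e9  / total_samples   # VictoriaMetrics
prom_per_sample = 52.3e9 / total_samples   # Prometheus (data + WAL)

print(round(vm_per_sample, 2))                    # 0.29 bytes/sample
print(round(prom_per_sample, 2))                  # 2.13 bytes/sample
print(round(prom_per_sample / vm_per_sample, 1))  # ~7.3x more disk space
```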
Disk IO: bytes written per second: VictoriaMetrics vs Prometheus
Disk IO: bytes read per second: VictoriaMetrics vs Prometheus
CPU usage, vCPU cores: VictoriaMetrics vs Prometheus
  • 1.5–1.75 vCPU cores are used by both systems for scraping 3400 node_exporter targets. This means that a 4vCPU system has enough capacity for scraping an additional 4000 node_exporter targets.
  • CPU usage spikes for both systems are related to background data compaction. These spikes are mostly harmless for VictoriaMetrics, while they may result in OOM (out of memory) crashes for Prometheus as explained below. See technical details about background compaction (aka merge) in VictoriaMetrics at these docs.
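The headroom estimate in the first bullet can be reproduced with a naive linear extrapolation, assuming the worst-case 1.75 cores and that per-target CPU cost stays constant:

```python
# Rough scrape-capacity extrapolation from the CPU numbers above.
targets = 3400
cores_used = 1.75            # worst-case CPU usage observed
total_cores = 4

cores_per_target = cores_used / targets
spare_targets = (total_cores - cores_used) / cores_per_target
print(int(spare_targets))    # ~4371 additional targets, i.e. roughly 4000
```

In practice one would leave CPU headroom for compactions and queries, so the article's "additional 4000 targets" is a sensible round-down of this figure.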
RSS Memory usage: VictoriaMetrics vs Prometheus


Both Prometheus and VictoriaMetrics can scrape millions of metrics from thousands of targets on a machine with a couple of vCPU cores. This is a much better result compared to InfluxDB and TimescaleDB, according to these benchmarks.



Aliaksandr Valialkin

Founder and core developer at VictoriaMetrics