Insert benchmarks with inch: InfluxDB vs VictoriaMetrics

Aliaksandr Valialkin
Jan 27, 2019

Recently VictoriaMetrics gained Influx line protocol support for time series data ingestion. It maps field names to metric names, while measurement names go into the “measurement” label value. This enables an apples-to-apples insert performance comparison between VictoriaMetrics and InfluxDB. This post presents benchmark results for various numbers of unique time series (aka various cardinality), various numbers of points per insert request and various numbers of tags per point.
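
For example, with this mapping a single Influx line protocol entry turns into one time series per field (the values below are purely illustrative):

    # Influx line protocol input:
    cpu,host=host1 usage_idle=98.2,usage_user=1.1

    # Resulting time series (field name -> metric name,
    # measurement name -> "measurement" label):
    usage_idle{measurement="cpu", host="host1"} 98.2
    usage_user{measurement="cpu", host="host1"} 1.1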

Benchmark tool

InfluxData provides a nice tool for measuring insert performance — influxdata/inch. This tool allows setting the following parameters, among many others:

  • The number of tags and the number of distinct values for each tag.
  • The number of fields for each point.
  • The number of points per batch sent to the server.

These parameters can be tuned to simulate various real-world conditions, such as the number of unique time series and the size of each request sent to the server.
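
For reference, the parameters above map to inch flags roughly as follows (a sketch based on the flags documented in influxdata/inch; verify against the inch -h output for your version):

    # 2 tags with 100 values each => 10K unique series (-t),
    # 1 field per point (-f), 1K points per batch (-b)
    inch -t 100,100 -f 1 -b 1000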

Setup

All the tests were run on the following hardware:

  • CPU: Intel Core i7-7500U
  • RAM: 16GB
  • Disk: 512GB SSD

InfluxDB version: 1.7.3. The Docker image was pulled from this repo.

VictoriaMetrics version: 1.6.2. The Docker image was pulled from this repo.
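
Both servers can be started along the lines of the following sketch (the VictoriaMetrics image name and tag are assumptions — adjust them to the image you use; InfluxDB serves its HTTP API on port 8086, while VictoriaMetrics accepts Influx line protocol on port 8428):

    # InfluxDB 1.7.3 from the official Docker Hub image
    docker run -d --name influxdb -p 8086:8086 influxdb:1.7.3

    # VictoriaMetrics single-server version (image name assumed)
    docker run -d --name victoria-metrics -p 8428:8428 \
        victoriametrics/victoria-metrics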

The number of concurrent clients: 4.

Total number of inserted values during each test: 30M.

The following time series cardinalities were tested: 1, 10, 100, 1K, 10K, 100K, 1M, 2M, 3M, 4M and 10M.

The following batch sizes were tested: 100, 1K and 10K.

The following number of fields were tested: 1 and 10.

The following number of tags were tested: 2 and 10.

Both client (influxdata/inch) and server (either VictoriaMetrics or InfluxDB) were run on the same hardware.
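
Putting the setup together, a single run from this test matrix may look like the sketch below (flag spellings per the influxdata/inch docs; point -host at port 8428 to benchmark VictoriaMetrics instead of InfluxDB):

    # 4 concurrent clients (-c), 1K points per batch (-b),
    # 2 tags with 1000 values each => 1M unique series (-t),
    # 1 field per point (-f), 30 points per series (-p),
    # i.e. 1M series x 30 points x 1 field = 30M inserted values.
    inch -v -c 4 -b 1000 -t 1000,1000 -f 1 -p 30 \
        -host http://localhost:8086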

Benchmark results

Let’s start with 100 points per request (aka a batch size of 100):

Noticeable things:

  • VictoriaMetrics outperforms InfluxDB by 4x–5x on 1M–2M cardinalities.
  • InfluxDB’s performance drops significantly on 3M and 4M cardinalities. InfluxDB showed a high iowait CPU share at these cardinalities.
  • InfluxDB didn’t finish the 10M cardinality test, since it required more RAM than the machine had (more than 16GB).
  • Insert performance degrades for both TSDBs as cardinality increases.

Next, let’s go to 1K points per request:

Noticeable things:

  • Insert performance increased with the bigger batch size. VictoriaMetrics reached 1M points per second.
  • While the gap between VictoriaMetrics and InfluxDB narrowed to 2.5x on cardinalities 1–100K, VictoriaMetrics still outperformed InfluxDB by 7.5x on 4M cardinality.
  • InfluxDB didn’t finish the 10M cardinality test because of its high RAM requirements.

Let’s look at RAM usage for various cardinalities in order to understand why InfluxDB cannot finish the 10M cardinality test in 16GB of RAM:

As you can see, RAM requirements for VictoriaMetrics and InfluxDB are on par for low cardinalities up to 100K. After that, InfluxDB’s RAM appetite skyrockets to 5GB for 1M unique time series and reaches 9GB for 4M unique time series. VictoriaMetrics uses 850MB of RAM for 1M cardinality and 4GB for 10M cardinality. This means VictoriaMetrics can handle roughly 10x more distinct time series than InfluxDB in the same amount of RAM.
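
If you want to track these numbers yourself, per-container memory usage can be sampled during a run with docker stats, for instance:

    # One-shot sample of memory usage for both containers
    # (container names match the docker run sketch above)
    docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}' \
        influxdb victoria-metrics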

Now let’s go to batches of 10K points:

Performance increases a bit compared to batches of 1K points.

All the previous tests were run on points with a single field. Let’s look at how performance scales with more fields.

10 fields per point lead to a nice speedup: VictoriaMetrics now reaches 3.6M inserted values per second, while InfluxDB reaches 1.5M inserted values per second.

Unfortunately, InfluxDB couldn’t fit more than 2M unique time series into 16GB of RAM, so there are no InfluxDB results for the 3M, 4M and 10M cardinalities :(

And the last chart shows what happens when the number of per-point tags is increased from 2 to 10:

An increased number of tags means slower inserts for both VictoriaMetrics and InfluxDB. Additionally, InfluxDB couldn’t fit more than 1M unique time series into the available RAM with the higher number of tags.

Conclusions

  • VictoriaMetrics has better insert performance than InfluxDB in all the tests. The performance gap between VictoriaMetrics and InfluxDB increases with higher cardinality.
  • VictoriaMetrics uses less RAM than InfluxDB on high cardinality time series.
  • It is easy to reproduce these benchmark results: just run the inch tool against Docker containers with VictoriaMetrics and InfluxDB on your own hardware, as in the sketches above. Post your results in the comments.

Raw benchmark results from this post are available in this spreadsheet. As for select performance, see this spreadsheet. In short, VictoriaMetrics outperforms InfluxDB in all the queries, especially heavy queries touching millions of data points and thousands of time series.

Though VictoriaMetrics’ main purpose is to be the best long-term remote storage for Prometheus, its single-server version can still substitute for InfluxDB when collecting data from Influx-compatible agents such as Telegraf. VictoriaMetrics supports native PromQL, so simpler yet powerful queries can be used for building graphs from Influx data compared to InfluxQL or Flux.
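
A quick way to see both protocols in action against a local VictoriaMetrics instance (endpoints assumed on the default port 8428; the metric name follows the field-to-metric mapping described at the top of this post):

    # Write one sample in Influx line protocol
    curl -d 'cpu,host=host1 usage_idle=98.2' http://localhost:8428/write

    # Query it back with PromQL via the Prometheus-compatible API
    curl -G 'http://localhost:8428/api/v1/query' \
        --data-urlencode 'query=usage_idle{measurement="cpu",host="host1"}'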

Read this article for more details about VictoriaMetrics.

Update: VictoriaMetrics is open source now!
