Why Performance Matters (in Object Storage)

Object stores are slow? The time of high-performance object storage is coming.
Guillaume Delaporte
VP Sales at OpenIO

In the last couple of years we have talked many times about flexibility, mostly made possible by ConsciousGrid technology. I think it is now time to talk about performance.

Object stores are slow

Most object stores on the market are pretty slow. Relegated to secondary data workloads, they can provide decent throughput, but latency is high and IOPS are usually very low. Strictly speaking, IOPS is not the right metric for an object store, but the number of metadata and data operations per second is an important one.

The performance of object stores is not constant. Depending on the size of the cluster and the algorithm used to manage the distributed hash table that maps objects to their positions in the cluster, the time needed to access an object varies from one query to the next.

The rigidity of some object stores also shows when different types of data must be handled at the same time. File size, for example, can affect efficiency, and hence the performance of an object store. Not all solutions treat data the same way; write amplification in the back end can become an issue if the object store is designed to handle large files and protect them with erasure coding (EC) while the user also stores small files. In fact, EC has real limitations when it comes to small files.
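The small-file problem above comes down to simple arithmetic. As a rough illustration (the chunk scheme and allocation unit below are assumptions for the example, not any vendor's actual defaults), you can compare the physical footprint of erasure coding against plain replication:

```python
# Illustrative storage-overhead arithmetic (assumed parameters, not any
# vendor's actual defaults). With EC k+m, each object is split into k data
# chunks plus m parity chunks; a chunk smaller than the backend's minimum
# allocation unit still consumes a full unit, which inflates small objects.

def ec_footprint(size_bytes, k, m, alloc_unit):
    """Bytes physically consumed by one object under k+m erasure coding."""
    chunk = max(size_bytes / k, alloc_unit)   # each data chunk is padded up
    return chunk * (k + m)                    # k data chunks + m parity chunks

def replication_footprint(size_bytes, copies, alloc_unit):
    """Bytes physically consumed by one object under n-way replication."""
    return max(size_bytes, alloc_unit) * copies

GiB = 1024 ** 3
KiB = 1024

# Large object: EC 6+3 stores 1.5x the data, much cheaper than 3 full copies.
large = 1 * GiB
print(ec_footprint(large, 6, 3, 4 * KiB) / large)           # → 1.5
print(replication_footprint(large, 3, 4 * KiB) / large)     # → 3.0

# Small object: with a 4 KiB allocation unit, EC 6+3 writes nine 4 KiB
# chunks for a single 4 KiB object — 9x amplification, worse than 3 replicas.
small = 4 * KiB
print(ec_footprint(small, 6, 3, 4 * KiB) / small)           # → 9.0
print(replication_footprint(small, 3, 4 * KiB) / small)     # → 3.0
```

For a 1 GiB object, EC 6+3 costs half as much as triple replication; for a 4 KiB object, it costs three times more. This is why an object store tuned for large EC-protected files can struggle when small objects arrive in the same cluster.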

In addition to this, some object store architectures are highly sensitive to configuration changes (intentional, like a cluster expansion, or unwanted, like a node failure), which usually trigger data and metadata rebalancing that degrades overall system performance.

Not your father's object store

OpenIO is different, and it usually outperforms competitors: we have recorded up to 3x improvements in real PoCs with customers against other open source solutions, and a 50% advantage over the closest competitor on the same hardware.

But how do we get that performance?

First of all, OpenIO has a unique design that is particularly lightweight and efficient. Written in C, it is very resource-efficient and can run with just 400 MB of RAM and a single ARM CPU core. (In fact, we support ARM in production and on Raspberry Pis for our community.) The ability to run on such a light configuration doesn't mean we can't take full advantage of the resources available in large x86 nodes. Quite the contrary: additional CPU and RAM are more than welcome for caching data and metadata and for speeding up every type of operation!

  • OpenIO doesn't use CHORD-like algorithms to manage its distributed hash table, but a system of directories that requires a predictable number of operations (three) to reach the data. No matter the size or capacity of the cluster, OpenIO always delivers every object in the same amount of time.

  • OpenIO provides a dynamic data protection mechanism that automatically decides, on the fly, the best protection for each object stored in the system, based on policies defined by the end user. Thanks to this optimization, the system always chooses the most efficient way to store data, which is also the fastest.

  • OpenIO doesn't rebalance data when you add or remove a node. ConsciousGrid technology works with the resources at hand and picks the best, most available locations for data. Contrary to the vast majority of object stores, which can only work with a fixed load-balancing mechanism, this kind of dynamic load balancing greatly improves performance without sacrificing data protection.
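The constant-time lookup in the first bullet can be sketched in a few lines. This is a hypothetical illustration of a directory-based resolution with a fixed number of hops (account → container → object), not OpenIO's actual implementation; all names and structures here are invented for the example:

```python
# Hypothetical sketch of directory-based object resolution with a constant
# number of hops, inspired by the "system of directories" idea described
# above. Names and structure are illustrative, not OpenIO's actual API.
# A CHORD-like DHT needs O(log N) hops as the ring grows; a layered
# directory always takes the same number of steps regardless of cluster size.

class DirectoryLookup:
    def __init__(self):
        self.accounts = {}  # account directory -> container directories

    def put(self, account, container, obj, location):
        """Record where an object's data lives."""
        self.accounts.setdefault(account, {}) \
                     .setdefault(container, {})[obj] = location

    def resolve(self, account, container, obj):
        """Exactly three lookups, whatever the size of the cluster."""
        containers = self.accounts[account]   # hop 1: account directory
        objects = containers[container]       # hop 2: container directory
        return objects[obj]                   # hop 3: object -> location

d = DirectoryLookup()
d.put("acct", "photos", "cat.jpg", ("node-07", "/vol2/abc"))
print(d.resolve("acct", "photos", "cat.jpg"))  # → ('node-07', '/vol2/abc')
```

The point of the sketch is the access pattern: `resolve` performs the same three steps for a 3-node cluster or a 3,000-node cluster, which is what makes lookup latency predictable.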
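The policy-driven protection in the second bullet could look like the following. The thresholds and schemes are assumptions made up for the example (they are not OpenIO's defaults or its configuration syntax); the idea is simply that a per-object decision lets small objects avoid EC's small-file penalty:

```python
# Illustrative per-object protection selection driven by user-defined
# policies. Thresholds and scheme names below are assumptions for the
# example, not OpenIO's actual defaults or configuration format.

def choose_protection(size_bytes, policies):
    """Return the scheme of the first policy whose size limit fits the object."""
    for max_size, scheme in policies:
        if size_bytes <= max_size:
            return scheme
    return policies[-1][1]  # fall back to the last (catch-all) policy

# Example policy set: replicate small objects (cheap and EC-unfriendly),
# erasure-code everything larger.
POLICIES = [
    (1024 * 1024, "3-way replication"),   # objects up to 1 MiB
    (float("inf"), "EC 6+3"),             # everything else
]

print(choose_protection(64 * 1024, POLICIES))           # → 3-way replication
print(choose_protection(500 * 1024 * 1024, POLICIES))   # → EC 6+3
```

Because the decision is made per object at write time, the system never has to force one protection scheme onto workloads it fits poorly.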

A couple of examples

  1. Not long ago we published a case study about IIJ, which uses OpenIO for its email platform (hosting millions of emails and storing up to 5,500 mails/sec). The cluster is all-flash and is replicated remotely for DR, with an RTO and RPO of less than 5 minutes. The cluster is small in capacity; emails are pretty small, but the number of objects is comparable to what you get from primary storage.
  2. Another story comes from Relex, which uses OpenIO as persistent storage for a 100TB in-memory database. This is an impressive use case in my opinion, especially because all the commits from the DB land in the object store and, if the servers reboot, they have to be read back very quickly to repopulate the DB.

Key Takeaways

Object storage is cool; fast object storage is way better! It opens up many possibilities and lets users consider many more workloads that can run on it.

Being fast also means being more efficient: less time spent waiting for data makes it possible to save other datacenter resources, such as CPU cycles.

OpenIO has already proven to be capable of reaching amazing speeds, and we are constantly working to optimize it and go even further. We have already passed the 850 Gb/s mark in a test, and we are working to further optimize some components and reach 1 Tb/s soon.

Guillaume has extensive experience in building and running large storage platforms, gained as a system engineer and project leader at Atos Worldline before co-founding OpenIO in 2015.