Configuring and Running Redis Cluster on Linux

Redis Cluster was released on April 1st, 2015 (with Redis 3.0.0) and has since been the preferred way to get automatic sharding and high availability.

In this article you’ll see a step-by-step tutorial on how to install and configure Redis Cluster on Linux.


Run the following in your Linux environment to download, unpack and build Redis (3.2.1 was the current release at the time of writing):

wget http://download.redis.io/releases/redis-3.2.1.tar.gz
tar xzf redis-3.2.1.tar.gz
cd redis-3.2.1
make

The compiled binaries will become available in the src directory.
In order to install the Redis binaries into /usr/local/bin, just use this command:

make install

You can use “make PREFIX=/some/other/directory install” if you wish to use a different destination, for example:

make install PREFIX=/export/home/rightv/redis

Redis cluster topology

A minimal working cluster requires at least three master nodes, and the Redis recommendation is to have at least one slave for each master. In other words:

  • Minimum 3 machines
  • Minimum 3 Redis master nodes on separate machines (sharding)
  • Minimum 3 Redis slaves, 1 slave per master (to allow a minimal fail-over mechanism).

In our example we will use 3 master and 3 slave nodes.


If one of the masters goes down, its slave will be promoted to master. It is also possible to add and remove nodes on the fly.

Redis Configuration File (redis.conf)

Each Redis node has its own configuration file (redis.conf). In order to run Redis in cluster mode we must set cluster-enabled yes for each node.

Here is our redis.conf file. Notice that the port and bind values will be different for each node. In this example I set port 7000 for each master and port 7001 for each slave.

port 7000

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 1

logfile redis.log
loglevel notice
slowlog-log-slower-than 10000
slowlog-max-len 64
latency-monitor-threshold 100

maxmemory 64mb
maxmemory-policy volatile-ttl
slave-read-only yes

save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbchecksum yes
dbfilename dump.rdb

appendonly yes

Once we have configured redis.conf for each Redis instance (node), we can launch them:

./redis-server redis.conf

IMPORTANT: Start each Redis instance only from the directory where its redis.conf is located!
For example: if our master's files are located in /home/codeflex/redis/7000, then we start that Redis instance from that directory.

Once you’ve done this, you will notice that dump.rdb and nodes.conf files are created in each Redis directory.
dump.rdb contains the database snapshot used for recovery, and nodes.conf stores the node's view of the cluster configuration. You should not edit these files.
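As a sketch of that layout (the directory paths, the trimmed template config and the use of mktemp here are illustrative, not from the article), you can give every instance its own working directory so the generated nodes.conf and dump.rdb files don't collide:

```shell
#!/usr/bin/env bash
# Example layout only: one directory per node, each with its own redis.conf.
BASE=$(mktemp -d)    # stand-in for e.g. /home/codeflex/redis

# A minimal template config (trimmed; use the full redis.conf from above).
cat > "$BASE/redis.conf.template" <<'EOF'
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
EOF

# One directory per port; rewrite only the port line per instance.
for port in 7000 7001; do
  mkdir -p "$BASE/$port"
  sed "s/^port .*/port $port/" "$BASE/redis.conf.template" > "$BASE/$port/redis.conf"
done

# Each node would then be started from inside its own directory:
#   (cd "$BASE/7000" && redis-server redis.conf)
echo "$BASE"
```

Because nodes.conf and dump.rdb are written to the current working directory (unless you set `dir` in redis.conf), starting each instance from its own directory keeps the files separated.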

Redis Slots Allocation

There are 16384 hash slots in Redis Cluster, and to compute what is the hash slot of a given key, we simply take the CRC16 of the key modulo 16384.
Every node in a Redis Cluster is responsible for a subset of the hash slots, so for example you may have a cluster with 3 nodes, where:

  • Node A contains hash slots from 0 to 5500.
  • Node B contains hash slots from 5501 to 11000.
  • Node C contains hash slots from 11001 to 16383.

This makes it possible to add and remove nodes in the cluster without downtime.
For example, if I want to add a new node D, I need to move some hash slots from nodes A, B and C to D. Similarly, if I want to remove node A from the cluster, I can just move the hash slots served by A to B and C. When node A is empty, I can remove it from the cluster completely.
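As a sketch, the slot computation described above can be reproduced in bash. This implements the CRC16 variant named in the Redis Cluster specification (CCITT/XModem: polynomial 0x1021, initial value 0); note that real clients also apply hash-tag rules (a {...} section in the key name) before hashing, which this sketch skips:

```shell
#!/usr/bin/env bash
# CRC16-CCITT (XModem): polynomial 0x1021, initial value 0 -- the
# variant used by Redis Cluster for key hashing.
crc16() {
  local key=$1 crc=0 i j byte
  for ((i = 0; i < ${#key}; i++)); do
    printf -v byte '%d' "'${key:i:1}"   # ASCII code of the character
    (( crc ^= byte << 8 ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo "$crc"
}

# slot = CRC16(key) mod 16384
hash_slot() { echo $(( $(crc16 "$1") % 16384 )); }

# Reference vector from the cluster spec: CRC16("123456789") = 0x31C3 (12739)
printf 'CRC16("123456789") = %d\n' "$(crc16 123456789)"
echo "hash slot of somekey: $(hash_slot somekey)"
```

This is only to illustrate the mechanism; in practice your client library computes the slot for you.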

In order to allocate slots we need to run the following commands in bash, pointing each loop at a different master (the host names are placeholders for your three machines):

for i in {0..5400}; do ./redis-cli -h <master1-ip> -p 7000 CLUSTER ADDSLOTS $i; done
for i in {5401..10800}; do ./redis-cli -h <master2-ip> -p 7000 CLUSTER ADDSLOTS $i; done
for i in {10801..16383}; do ./redis-cli -h <master3-ip> -p 7000 CLUSTER ADDSLOTS $i; done
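The exact boundaries don't matter, as long as each of the 16384 slots is assigned exactly once. As a sketch (this helper is mine, not from the article), here is how an even split, like the one a tool such as redis-trib computes, can be derived; its ranges differ slightly from the loops above, and either partition is valid:

```shell
#!/usr/bin/env bash
# Sketch: compute an even split of the 16384 hash slots across N masters
# (the last master absorbs the remainder).
split_slots() {
  local masters=${1:-3} total=16384
  local per=$(( total / masters )) start=0 end i
  for (( i = 1; i <= masters; i++ )); do
    end=$(( start + per - 1 ))
    (( i == masters )) && end=$(( total - 1 ))
    echo "master$i: slots $start-$end"
    start=$(( end + 1 ))
  done
}

split_slots 3
```

For three masters this prints 0-5460, 5461-10921 and 10922-16383.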

Then you can verify that the slots have been allocated successfully:

redis-cli -h <master1-ip> -p 7000 CLUSTER NODES

Output: e898985408ad6a49d714b82c1e157276d4681ae9 <ip:port> myself,master - 0 0 1 connected 0-5400

Cluster formation

Now that all 16384 hash slots are allocated and all 6 instances are running, we need to tell them to meet each other, so that every node will “know” every other node.

redis-cli -h <master1-ip> -p 7000 CLUSTER MEET <slave1-ip> 7001
redis-cli -h <master1-ip> -p 7000 CLUSTER MEET <master2-ip> 7000
redis-cli -h <master1-ip> -p 7000 CLUSTER MEET <slave2-ip> 7001
redis-cli -h <master1-ip> -p 7000 CLUSTER MEET <master3-ip> 7000
redis-cli -h <master1-ip> -p 7000 CLUSTER MEET <slave3-ip> 7001
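The meet step can also be scripted. As a sketch (all host names here are placeholders, not values from the article), a single seed node meets every other node, and the gossip protocol spreads cluster membership from there:

```shell
#!/usr/bin/env bash
# Print the CLUSTER MEET commands to run from one seed node. The IPs
# are placeholders; replace them with your machines' addresses.
SEED_HOST="<master1-ip>"; SEED_PORT=7000
OTHERS=(
  "<slave1-ip>:7001"
  "<master2-ip>:7000"
  "<slave2-ip>:7001"
  "<master3-ip>:7000"
  "<slave3-ip>:7001"
)

meet_commands() {
  local node
  for node in "${OTHERS[@]}"; do
    echo "redis-cli -h $SEED_HOST -p $SEED_PORT CLUSTER MEET ${node%%:*} ${node##*:}"
  done
}

meet_commands
```

This only prints the commands; pipe the output to bash (or drop the echo) once the placeholders are filled in.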

Configure Replication

Now that all nodes are connected into a cluster, we need to set up replication. In other words, we need to define which nodes are masters and which nodes are their slaves. The argument to CLUSTER REPLICATE is the master's node ID as shown by CLUSTER NODES:

./redis-cli -h <slave1-ip> -p 7001 CLUSTER REPLICATE e898985408ad6a49d714b82c1e157276d4681ae9
./redis-cli -h <slave2-ip> -p 7001 CLUSTER REPLICATE 93e74c3c5cc0328755ba24cbc9b166439dac4e93
./redis-cli -h <slave3-ip> -p 7001 CLUSTER REPLICATE bced1d26ffc236fdaeffe9864f155c7a55a0a3b0

If you run redis-cli -h <master1-ip> -p 7000 CLUSTER NODES you will see something like this:

974502302670a8aa78280c9d414ba11b9a8e1133 <ip:port> slave e898985408ad6a49d714b82c1e157276d4681ae9 0 1461157218265 2 connected
837408b39322813f09ff285c6e8545c9533386f6 <ip:port> slave 93e74c3c5cc0328755ba24cbc9b166439dac4e93 0 1461157217763 5 connected
bced1d26ffc236fdaeffe9864f155c7a55a0a3b0 <ip:port> master - 0 1461157216260 4 connected 10923-16383
e898985408ad6a49d714b82c1e157276d4681ae9 <ip:port> myself,master - 0 0 1 connected 0-5461
93e74c3c5cc0328755ba24cbc9b166439dac4e93 <ip:port> master - 0 1461157217765 5 connected 5462-10922
fe417e558b7f9a7c99e56f4f825980940c9ed589 <ip:port> slave bced1d26ffc236fdaeffe9864f155c7a55a0a3b0 0 1461157217763 4 connected
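Before writing any data, it is worth sanity-checking that all 16384 slots are covered. As a sketch (assuming each master serves a single contiguous range, and using the master lines from the output above as canned input with the address column trimmed), you can sum the slot ranges reported by CLUSTER NODES:

```shell
#!/usr/bin/env bash
# Sum the slot ranges in CLUSTER NODES output; a healthy cluster
# covers all 16384 slots. In practice pipe the live output in:
#   redis-cli -h <master1-ip> -p 7000 CLUSTER NODES | count_slots
count_slots() {
  local total=0 line slots
  while read -r line; do
    slots=${line##* }                 # last field, e.g. 0-5461
    [[ $slots == *-* ]] || continue   # slave lines end in "connected"
    total=$(( total + ${slots#*-} - ${slots%-*} + 1 ))
  done
  echo "$total"
}

# Canned sample (master lines only, address column trimmed):
sample='e898985408ad6a49d714b82c1e157276d4681ae9 myself,master - 0 0 1 connected 0-5461
93e74c3c5cc0328755ba24cbc9b166439dac4e93 master - 0 1461157217765 5 connected 5462-10922
bced1d26ffc236fdaeffe9864f155c7a55a0a3b0 master - 0 1461157216260 4 connected 10923-16383'

echo "assigned slots: $(count_slots <<< "$sample")"
```

If the total is less than 16384, some slots were never assigned and writes to them will fail.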

Verify Redis Cluster is working properly

In order to check that our Redis Cluster is working properly, let's insert a key-value pair. Note the -c flag, which enables cluster mode (follow redirects):

./redis-cli -h <master1-ip> -p 7000 -c
> set somekey somevalue

-> Redirected to slot [11058] located at <ip:port>

When you insert data into a Redis Cluster, the hash function calculates the appropriate slot for it. You can see from the output that although we connected to one node, Redis automatically redirected the write to the node responsible for slot 11058. This means that our Redis Cluster is working properly.

Fail-over Procedure

Our cluster configuration consists of 3 servers, each running one master and one slave of a different master.


When one of the servers goes down, Redis Cluster performs an automatic failover that promotes the failed master's slave to master.
For instance, let's say Server1 goes down. After the failover completes (usually a couple of seconds) and Server1 is restarted, its former master rejoins as a slave, so the cluster structure changes.


The cluster keeps working, but we now have a potentially dangerous situation: two masters on the same server (Server2).
This is when the administrator should perform an after-failover procedure that rearranges the cluster so that each server again runs one master and a slave of some other master.

The procedure is as follows:

  1. On Server2, the administrator stops the promoted M1 Redis instance: service redis7001 stop
  2. After a couple of seconds, ensure that the cluster failover mechanism took place and S1 became a master (M1) with the corresponding slots. (Note that the cluster will be in a failure state while this happens.)
  3. Start the Redis instance on Server2 again: service redis7001 start
  4. Ensure that the second Redis instance on Server2 became a slave of the first instance on Server1 and that the cluster structure has returned to its initial state.

Redis Java Client

Redisson is my preferred Java Redis client since it supports Cluster server mode with automatic server discovery and Spring cache integration. Also, I received really fast and reliable support when I found some minor bugs.
Redisson is based on high-performance async and lock-free Java Redis client and Netty 4 framework.
It is compatible with Redis 2.8+ and JDK 1.6+, and licensed under the Apache License 2.0.
Some other Redisson features:

  • Spring cache integration
  • AWS ElastiCache servers mode:
    • automatic new master server discovery
    • automatic new slave servers discovery
  • Cluster servers mode:
    • automatic master and slave servers discovery
    • automatic new master server discovery
    • automatic new slave servers discovery
    • automatic slave servers offline/online discovery
    • automatic slots change discovery
  • Master with Slave servers mode: read data using slave servers, write data using master server
  • Single server mode: read and write data using single server
  • Supports auto-reconnect
  • Auto-retry of commands that failed to send
  • Supports many popular codecs (Jackson JSON, CBOR, MsgPack, Kryo, FST, LZ4, Snappy and JDK Serialization)

If you have any questions, please write to us in the comments section below.