• Redis is an in-memory (RAM) key-value data store.
  • It supports data structures such as strings, lists, hashes, sets, and sorted sets.

Redis persistence

  • Persistence: just a fancy word for saving a copy of the data, a backup of sorts.

Redis has different persistence options:
1. RDB persistence: basically a snapshot of the dataset at specified intervals. In other words, Redis takes a snapshot of the current items and saves them to a file.
2. AOF persistence: logs every write operation received by the server. The log is replayed at server startup to reconstruct the original dataset.
3. Disabled persistence: data exists only as long as the server is running.
4. RDB + AOF: combining both persistence options in the same Redis instance. When Redis restarts, the AOF file is used to reconstruct the original dataset, since it is guaranteed to be the most complete.

RDB Persistence

Redis saves snapshots of the dataset on disk. The snapshot is a binary file called dump.rdb. You can configure Redis to save the dataset every X seconds, but only if there are at least Y changes in the dataset, or you can manually call the SAVE or BGSAVE commands, which take the snapshot immediately. For example, the directive save 60 1000 will make Redis dump the dataset to disk every 60 seconds, but only if at least 1000 keys have changed. This process is called snapshotting. Here is how Redis does it: first, Redis forks, giving us parent and child processes. The child process starts writing the dataset to a temporary RDB file. When the child is done writing the new RDB file, it replaces the old dump.rdb with it.
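The fork-and-rename mechanism described above can be sketched in Python. This is a toy model, not Redis itself: it uses JSON instead of the binary RDB format, and the file name and helper are made up for illustration.

```python
import json, os, tempfile

def bgsave(dataset, path):
    """Toy sketch of BGSAVE: fork, have the child write a temp file,
    then atomically rename it over the old dump. The parent returns
    immediately and keeps serving clients."""
    pid = os.fork()
    if pid == 0:  # child process: write the snapshot and exit
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(dataset, f)
        os.rename(tmp, path)  # atomic replace, like the new dump.rdb
        os._exit(0)
    return pid  # parent process: reap the child later

data = {"user:1": "alice", "user:2": "bob"}
pid = bgsave(data, "dump.json")
os.waitpid(pid, 0)  # wait for the snapshot to finish
with open("dump.json") as f:
    print(json.load(f) == data)  # True
```

The atomic rename is the key detail: readers of the dump file always see either the complete old snapshot or the complete new one, never a half-written file.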


  • RDB is a compact, single-file, point-in-time representation of the Redis data.
  • RDB files are perfect for backups.
  • An RDB file is good for disaster recovery because, being compact, it can be transferred to distant data centers quickly.
  • RDB maximizes Redis performance because the only work the parent process needs to do is fork a child process and let the child do the rest. The parent process never performs disk I/O for the snapshot.
  • RDB allows faster restarts with big datasets compared to AOF, which replays more slowly.


  • RDB is not good if you need to minimize the chance of data loss when Redis stops working. Normally you would configure RDB to create a snapshot every 5 minutes or so, but be prepared to lose the writes from the last few minutes if a power outage or other disaster happens.
  • RDB needs to fork often in order to save to disk using the child process. Forking is expensive and time-consuming if the dataset is big, and it may cause Redis to stop serving clients for a moment if the dataset is very large and CPU performance is poor. AOF also needs to fork, but you can tune how often you want to rewrite your logs.
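The snapshot schedule discussed above maps directly onto redis.conf save points. A hypothetical fragment (the thresholds and paths here are illustrative, not Redis defaults):

```
# redis.conf – hypothetical RDB snapshot settings
save 300 10        # snapshot every 5 minutes if at least 10 keys changed
save 60 10000      # or every minute under heavy write load
dbfilename dump.rdb
dir /var/lib/redis
```

Multiple save lines combine: a snapshot is taken when any one of the conditions is met.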

AOF Persistence

  • Snapshotting is not durable: if the system stops, a power outage happens, or you accidentally kill -9 your instance, the latest data written to Redis is lost because the latest snapshot was never created. To prevent this, we can use the AOF file. To enable AOF logging, set appendonly yes in the Redis configuration file. From then on, any change to the dataset is appended to the AOF, and when you restart Redis it replays the AOF to rebuild the state.
  • The AOF log gets bigger and bigger as write operations are performed. Whenever you issue the BGREWRITEAOF command, Redis writes the shortest sequence of commands needed to rebuild the current dataset in memory.
  • We can configure how often Redis syncs data to disk. There are three options:
    1. appendfsync always – sync to disk every time a new command is appended to the AOF. This is very slow but very safe.
    2. appendfsync everysec – in case of disaster, only the commands from the last second are lost.
    3. appendfsync no – never force a sync to disk; leave it to the OS to decide when to flush. This is the fastest and least safe method. Normally, Linux will flush the data about every 30 seconds with this configuration.
  • The way the AOF file is rewritten is somewhat similar to creating an RDB file. First, Redis forks and creates a child process. The child process starts writing the new AOF log to a temporary file, while the parent accumulates all new changes in an in-memory buffer. At the same time, the parent process keeps writing new changes to the OLD AOF file, so if the child process doing the rewrite fails, we still have a safe dataset in the old AOF file (smart, eh?). When the child process is done rewriting the file, the parent gets a signal that the child is done and appends the in-memory buffer to the end of the file the child generated. Finally, Redis renames the new file over the old one and starts appending new data to the new AOF file.
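The three fsync policies above are selected with the appendfsync directive. A hypothetical redis.conf fragment (only one appendfsync line should be active):

```
# redis.conf – hypothetical AOF settings
appendonly yes             # enable AOF logging
appendfsync everysec       # lose at most ~1 second of writes
# appendfsync always       # safest, slowest: fsync on every command
# appendfsync no           # fastest, least safe: let the OS decide
```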


  • Allows different sync policies: no sync at all, sync every second, or sync at every query.
  • The default policy is to sync every second.
  • The AOF log file is an append-only log.
  • If the log ends with a half-written command (because of a full disk, for example), the redis-check-aof tool can fix that.
  • Redis can automatically rewrite the AOF in the background when it gets too big.
  • The AOF contains a log of all operations, one after the other, in an easy-to-parse format, and you can even export the file.


  • AOF files get bigger than RDB files for the same dataset.
  • AOF can be slower than RDB, depending on the fsync policy. With fsync disabled, it should be exactly as fast as RDB even under high load.

How to fix a corrupted AOF file?

  • If the AOF file somehow gets corrupted and Redis complains on startup, the best thing to do is run the redis-check-aof utility, understand the problem, jump to the given offset in the file, and try to manually edit/repair the file. The utility can also fix the problem for you, but when it finds a corrupted entry and fixes it, all data after that entry is discarded, possibly leaving a massive hole in your dataset.
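Assuming the default AOF file name (yours depends on your configuration), an automatic repair with the utility mentioned above looks like this; it asks for confirmation before truncating the file:

```
$ redis-check-aof --fix appendonly.aof
```

Make a copy of the AOF file first, since the fix is destructive from the corrupted entry onward.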

So, which one to use?

  • Use RDB if you can live with a few minutes of data loss in case of disaster.
  • Never use AOF alone; use it in combination with RDB.

Redis Replication

Redis has masters and replicas. Generally, a client writes to the master, and that data is asynchronously replicated to its replicas. From then on, the key:value pairs are identical on both the master and its replicas, and clients can read data from either. Usually, replicas are read-only and clients cannot write to them; the only way replicas are populated is through replication from the master. It is also practical to place a failover service around the master and replicas to keep track of whether some instance goes down. For example, Sentinel monitoring can detect that the master is down and automatically promote one of the replicas to become the new master. This keeps the service up and running, and clients are served happily.

Redis replication is asynchronous. What this means is that when a client writes to the master, the replicas do not get that data instantly. All the client is interested in is writing the data to the master; what happens next, that is, whether the data gets replicated or not, is not its concern. The client's job is done, and it is the master's role to replicate the write to its replicas.

Generally, the number of replicas one master can have is not limited. You can easily attach dozens of replicas if needed, and the master will accept them. Also, by default replicas are read-only instances, which means clients cannot write data to them. You can change this with the replica-read-only configuration parameter in redis.conf or by using CONFIG SET from redis-cli. Normally, clients write data to the master, which syncs that data to its replicas. If you do configure a replica to accept writes, those writes stay local: whatever a client writes to that replica stays there and is not replicated to the other replicas or to the master, whereas a write to the master goes to all replicas. Furthermore, data written to a replica can get overwritten by the next replication from the master; for example, if a key on the replica matches a key sent by the master, it gets overwritten with the master's value.

Why is replication used anyway? High availability is the first reason: we want data to be accessible to clients all the time. The second reason is spreading read load. For instance, if one replica cannot handle the requests for data, we can create another replica that holds the same data and serves it to clients.

What are the types of replication? One is called full sync, the second partial sync.
1. Full sync replication: it basically takes the entire dataset from the master, creates a dump file, and transfers it to the replica as one chunk. The master does this by forking itself into an identical child process. The child process then starts the full sync and transfers the dump file over to the replica; at the same time, clients keep writing to and reading from the parent master process without interruption. The creation of that dump file is interesting, because it can be done in two ways: either the master creates a dump, saves it on disk, and then sends it to the replica (the default method in older Redis versions), or, more interestingly, it streams the dump over the network as it creates it (used in newer Redis versions, the so-called diskless transfer). Even though the master does not need a disk to transfer the dump to the replica over the network, the replica still needs a disk to accept that file and store it on the drive. The file is then used to reconstruct the dataset so the replica becomes synced with its master. Diskless replication can be enabled with the repl-diskless-sync configuration parameter.
2. Partial sync replication: since version 2.8, Redis includes partial sync replication. This replication type is generally used when there is a minor link break between the master and a replica, leaving the two a few moments/commands apart. If configured, the replica reconnects to the master and asks it to resynchronize without dumping the whole dataset. To do this, the master uses the replication backlog, which is basically a bucket where the most recent commands/writes are stored, so that when a replica that is a bit behind asks for them, the master can send the data from that buffer and the replica gets back in sync. This buffer is limited in size and can be configured with the repl-backlog-size directive (in bytes) in the redis.conf file. If the replica is too far behind the master, the backlog may not contain all the missing data. In that case, a normal full sync replication is done, and a full dump of the Redis in-memory content is sent over to the replica as a whole.

Master-replica replication lets replica instances hold exact copies of their master. A replica will automatically reconnect to the master every time the link breaks and attempt to become an exact copy of the master, regardless of what happens to the master. When the master and replica instances are well connected, the master keeps the replica updated by sending a stream of commands to it. The master sends this stream in order, so whatever happens on the master is applied in order on the replica.
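The replication backlog described above is essentially a bounded buffer of recent writes indexed by a global offset. A toy Python sketch (class and method names are made up for illustration, not Redis internals):

```python
from collections import deque

class ReplBacklog:
    """Toy model of the replication backlog: a bounded buffer of
    recent write commands, indexed by a global, ever-growing offset."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)  # oldest entries fall off the front
        self.offset = 0                # offset assigned to the next command

    def feed(self, cmd):
        self.buf.append((self.offset, cmd))
        self.offset += 1

    def partial_sync(self, replica_offset):
        """Return the commands the replica missed, or None when the
        backlog no longer covers its offset and a full sync is needed."""
        if self.buf and self.buf[0][0] <= replica_offset:
            return [c for off, c in self.buf if off >= replica_offset]
        return None

b = ReplBacklog(size=3)
for cmd in ["SET a 1", "SET b 2", "SET c 3", "SET d 4"]:
    b.feed(cmd)
print(b.partial_sync(2))  # ['SET c 3', 'SET d 4']
print(b.partial_sync(0))  # None -> replica too far behind, full sync
```

This captures the trade-off: a larger backlog tolerates longer disconnections at the cost of memory, which is why repl-backlog-size is tunable.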

When the link between the master and a replica breaks (because of network issues, for example), the replica reconnects and attempts a partial resynchronization: it tries to obtain the part of the command stream it missed during the disconnection. When partial resynchronization is not possible, the replica asks for a full synchronization. This involves the more complex process where the master has to create a snapshot of all its data, as explained under full sync replication. The master then sends that snapshot to the replica and continues sending the stream of commands after the snapshot. It is important to say again that, thanks to forking, the master can continue to handle queries while one or more replicas perform a full or partial synchronization.

While a replica performs a full synchronization, it can serve reads using the old version of the dataset (assuming you configured Redis to do so in redis.conf). Otherwise, you can tell replicas to return an error to clients while the replication stream is down. After the initial sync, the old dataset must be deleted and the new one loaded. While loading the new dataset, the replica blocks incoming connections for a brief time (which can be a few seconds for very large datasets).

A typical technique involves configuring the master to avoid persisting to disk at all, while the replica is configured to persist the dataset from time to time or to have AOF enabled. The important thing here is that if the master gets restarted, it starts with an empty dataset, and if a replica tries to sync with it, the replica ends up empty as well, leaving you with no data to persist. This is dangerous when the master has persistence turned off. Consider this example: the master has no persistence, and two replicas are replicating from it. The master crashes, and an auto-start system restarts the process. After Redis is loaded again, its dataset is empty, and when the replicas replicate from the master, the empty dataset overwrites the datasets on the replicas, leaving you without data.

Set replica to authenticate to master

  • If the master has a password set via the requirepass directive in redis.conf, configure the replica to use that password in all sync operations. To do so, use redis-cli and type CONFIG SET masterauth <password>. To set it permanently, add masterauth <password> to the redis.conf config file.

Allow writes only with X attached replicas:

  • It is possible to configure a Redis master to accept write queries only if at least X replicas are currently connected. However, because Redis uses asynchronous replication, it cannot ensure that a replica actually received a given write, so there is always a small window for data loss.
  • This feature works as follows:
    1. Redis replicas ping the master every second, acknowledging the amount of replication stream they have processed.
    2. The Redis master remembers the last time it received a ping from every replica.
    3. The user configures a minimum number of replicas whose lag must be no greater than a maximum number of seconds.
  • If there are at least X replicas with a lag less than Y seconds, the write is accepted.
  • If the conditions are not met, the master replies with an error and the write is not accepted.
  • There are two configuration parameters for this feature:
    1. min-replicas-to-write <number of replicas>
    2. min-replicas-max-lag <number of seconds>
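The accept/reject decision described above is a simple count over replica lags. A sketch in Python (the function name and arguments are invented for illustration; Redis does this check internally):

```python
import time

def accept_write(replica_acks, min_replicas, max_lag, now=None):
    """Toy model of min-replicas-to-write / min-replicas-max-lag:
    count replicas whose last ACK is at most max_lag seconds old,
    and accept the write only if enough of them qualify."""
    now = time.time() if now is None else now
    good = sum(1 for last_ack in replica_acks if now - last_ack <= max_lag)
    return good >= min_replicas

now = 1000.0
acks = [now - 1, now - 2, now - 30]   # three replicas, one lagging 30s
print(accept_write(acks, min_replicas=2, max_lag=10, now=now))  # True
print(accept_write(acks, min_replicas=3, max_lag=10, now=now))  # False
```

Note the check is based on the last acknowledgment time, so it bounds the data-loss window rather than eliminating it, exactly as the bullet above warns.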


BGREWRITEAOF

  • Tells Redis to start creating a new AOF file. The command creates a temporary, minimal version of the current AOF. If BGREWRITEAOF fails, no data is lost, as the old AOF file is left untouched. The rewrite is triggered by Redis only if there is no background process already doing persistence: if a Redis child is creating a snapshot on disk, the AOF rewrite is scheduled but not started until the child producing the RDB file finishes. Use INFO to see whether an AOF rewrite is scheduled. If an AOF rewrite is already in progress, the command returns an error and no new rewrite is scheduled.
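The log-compaction idea behind the AOF rewrite can be sketched in a few lines of Python. This is a toy model handling only SET and DEL, not the real AOF format:

```python
def rewrite_aof(log):
    """Toy AOF rewrite: replay the command log into an in-memory state,
    then emit the shortest sequence of SETs that rebuilds that state."""
    state = {}
    for cmd, *args in log:
        if cmd == "SET":
            state[args[0]] = args[1]
        elif cmd == "DEL":
            state.pop(args[0], None)
    return [("SET", k, v) for k, v in state.items()]

log = [("SET", "a", "1"), ("SET", "a", "2"), ("SET", "b", "9"), ("DEL", "b")]
print(rewrite_aof(log))  # [('SET', 'a', '2')]
```

Four logged operations collapse to one command, which is why the rewritten AOF is so much smaller than the raw log.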


BGSAVE

  • Saves the dataset in the background. Redis forks: the parent continues to serve clients while the child saves the dataset on disk and exits once done. If BGSAVE SCHEDULE is used while an AOF rewrite is in progress, the command returns OK right away and schedules the background save for the next opportunity. Check whether the save succeeded with the LASTSAVE command.


SAVE

  • Performs a synchronous save of the dataset, producing a point-in-time snapshot of all the data inside the Redis instance in the form of an RDB file. It blocks all other clients while running, so do not use it in a production environment. Instead, use BGSAVE, which lets clients keep accessing the dataset. SAVE is acceptable only as a last resort, when BGSAVE cannot fork a child process to create the RDB file in the background.

Analyzing redis.conf directives

These directives come from the redis.conf file. Also, the /var/run/redis/ directory holds all instances and their PIDs; redis.pid holds the PID of the master instance.

pidfile /var/run/redis.pid => tells Redis where to store the file containing the PID of this instance. If pidfile is set, Redis writes its own PID there at startup and removes the file on exit. If the server is not daemonized, no PID file is created.

include /path/to/other.conf => specifies the path of an included file. The included file is simply inserted into redis.conf at the point where it is referenced. We can put some directives inside the include file and use them from redis.conf. Be careful not to point this at some random file, as Redis will try to parse it and fail.

loadmodule /path/to/module.so => loads a module at startup. If the server is unable to load the module, it aborts. Multiple loadmodule directives can be used.

bind => listen only on the given network interfaces/addresses. If no bind is set, Redis listens for connections on all network interfaces available on the server. It is also possible to listen on just one or several selected interfaces with the bind directive.

protected-mode => when enabled, if the server is not bound to a specific set of addresses with the bind directive and no password is configured, the server only accepts connections from clients connecting from the IPv4 and IPv6 loopback addresses.

port 6379 => accept connections on this port. If set to 0, Redis will not listen on a TCP socket.

unixsocket /var/run/redis.sock => path of the Unix socket used to listen for incoming connections.

timeout 300 => close a connection after the client has been idle for 300 seconds. Use 0 to disable closing connections, even if the client is idle for a very long time.

tcp-keepalive 0 => period, in seconds, used to send TCP keepalive ACKs to clients in order to detect dead peers. 0 disables keepalives.


daemonize yes => by default Redis does not run as a daemon/service (in the background). Set this to yes to run it in the background. When daemonized, Redis writes a PID file to /var/run/redis.pid (or wherever the pidfile directive points).

loglevel notice => choose between four log levels:
1. debug – prints a lot of information
2. verbose – shows rarely useful information
3. notice – good for production environments
4. warning – only very important and critical messages are logged

logfile "" => specify the log file name. An empty string ("") tells Redis to log to STDOUT. If the string is empty and Redis is daemonized, log output is sent to /dev/null (which basically turns off logging).

databases 16 => set the number of databases. Clients use DB 0 by default and can switch with SELECT.

always-show-logo yes => always print the ASCII art logo at startup, even when the log output is not an interactive terminal. With no, the logo is shown only in interactive sessions.


save 20 50 => save the DB to disk every 20 seconds, but only if at least 50 keys have changed.

save "" => remove all previously configured save points.

stop-writes-on-bgsave-error no => when set to yes, Redis stops accepting writes if RDB snapshots are enabled and the latest background save failed; once BGSAVE succeeds again, writes are allowed. Setting it to no, as here, keeps Redis accepting writes even when saving fails.

rdbcompression yes => compress strings with LZF when dumping .rdb files. If set to no, the dump files will be bigger.

rdbchecksum yes => a CRC64 checksum is placed at the end of the file. This makes the format more resistant to corruption, but computing the checksum costs about 10% in performance when saving and loading RDB files.

dbfilename dump.rdb => if RDB persistence is enabled, the dump file will be called dump.rdb.

dir /usr/local/redis/db => RDB and AOF files are written to this directory, with the file name specified by dbfilename.


replicaof <masterIP> <masterPORT> => makes this instance a replica of the given master, meaning it becomes an exact copy of it.
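Putting the replication directives together, a hypothetical replica-side redis.conf fragment (the address and password below are made up for illustration):

```
# redis.conf on the replica – illustrative values
replicaof 10.0.0.5 6379    # follow this master
masterauth s3cretpass      # only needed if the master sets requirepass
replica-read-only yes      # the default: reject client writes
```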
