Monday, December 29, 2014

Implementing Switchover/Switchback in PostgreSQL 9.3.

This post walks DBAs through setting up a graceful Switchover and Switchback environment for PostgreSQL high availability. Firstly, thanks to the patch authors Heikki and Fujii for making Switchover/Switchback easier in PostgreSQL 9.3 (pardon me if I missed other names).

Let me briefly illustrate how things stood prior to these patches. As you know, Standbys are critical components for achieving fast and safe disaster recovery. In PostgreSQL, recovery relies heavily on timelines to identify the series of WAL segments before and after a PITR or a Standby promotion, so that WAL segments never overlap. The timeline ID is embedded in WAL segment file names (e.g., in $PGDATA/pg_xlog/0000000C000000020000009E the prefix "0000000C" is the timeline ID). In Streaming Replication both Primary and Standby follow the same timeline ID; however, when the Standby is promoted to new master during a Switchover it bumps the timeline ID, and the old Primary then refuses to restart as a Standby because of the timeline ID difference, throwing an error like:
FATAL:  requested timeline 10 is not a child of this server's history
DETAIL:  Latest checkpoint is at 2/9A000028 on timeline 9, but in the history of the requested timeline, the server forked off from that timeline at 2/99017E68.
Thus, a new Standby had to be built from scratch. If the database is huge, rebuilding takes a long time, and during that period the newly promoted Primary runs without a Standby. There was also another issue: when a Switchover happens the Primary does a clean shutdown, and the walsender process sends all outstanding WAL records to the standby but does not wait for them to be replicated before it exits. The walreceiver then fails to apply those outstanding WAL records because it detects the connection closure and exits.

Today, with two key patches in PostgreSQL 9.3, both of these issues are addressed very well by the authors, and Streaming Replication Standbys now follow a timeline switch consistently. We can seamlessly and painlessly swap the duties of Primary and Standby with just a restart, greatly reducing the time spent rebuilding a Standby.

Note: Switchover/Switchback is not possible if the WAL archives are not accessible to both servers, and during Switchover the Primary database must do a clean shutdown (normal or fast mode).

For the demo, let's start with a Streaming Replication setup (see the wiki on how to set up SR), which I have configured in my local VM between two clusters (5432 as Primary and 5433 as Standby) sharing a common WAL archive location, because both clusters need full access to the complete sequence of WAL archives. Look at the snapshot shared below with the setup details and current timeline ID for a better understanding of the concept.
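For reference, here is a sketch of the parameters that matter for this setup (the archive_command below is an assumption chosen to match the restore_command used later; adjust directories and connection strings to your environment):

# postgresql.conf on the Primary [5432]
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /opt/PostgreSQL/9.3/archives93/%f'   # assumed; must match restore_command
max_wal_senders = 5
hot_standby = on

# recovery.conf on the Standby [5433] (data_slave)
standby_mode = on
primary_conninfo = 'host=localhost port=5432 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
recovery_target_timeline = 'latest'
trigger_file = '/tmp/primary_down.txt'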

At this stage everyone should have a solid understanding that Switchover and Switchback are planned activities. With the SR setup in place, we can exchange the duties of Primary and Standby as shown below:

Switchover steps:

Step 1. Do clean shutdown of Primary[5432] (-m fast or smart)
[postgres@localhost:/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf
waiting for server to shut down.... done
server stopped
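As an optional extra check (not part of the original steps), pg_controldata can confirm that the shutdown was clean:
[postgres@localhost:/~]$ /opt/PostgreSQL/9.3/bin/pg_controldata /opt/PostgreSQL/9.3/data | grep 'Database cluster state'
Database cluster state:               shut down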
Step 2. Check for sync status and recovery status of Standby[5433] before promoting it:
[postgres@localhost:/opt/PostgreSQL/9.3~]$  psql -p 5433 -c 'select pg_last_xlog_receive_location() "receive_location",
pg_last_xlog_replay_location() "replay_location",
pg_is_in_recovery() "recovery_status";'
 receive_location | replay_location | recovery_status
------------------+-----------------+-----------------
 2/9F000A20       | 2/9F000A20      | t
(1 row)
The Standby is in complete sync. At this stage we are safe to promote it as the new Primary.
Step 3. Open the Standby [5433] as the new Primary with pg_ctl promote or by creating the trigger file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ grep trigger_file data_slave/recovery.conf
trigger_file = '/tmp/primary_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_down.txt

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 f
(1 row)

In Logs:  
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  trigger file found: /tmp/primary_down.txt
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  redo done at 2/A0000028
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  selected new timeline ID: 14
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  restored log file "0000000D.history" from archive
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  archive recovery complete
2014-12-29 00:16:04 PST-26342-- [host=] LOG:  database system is ready to accept connections
2014-12-29 00:16:04 PST-31874-- [host=] LOG:  autovacuum launcher started
The Standby has been promoted to master and a new timeline was selected, which you can notice in the logs.
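Alternatively, instead of creating the trigger file, the same promotion could have been done with pg_ctl promote (a sketch assuming the Standby's data directory shown above):
[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave promote
server promoting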
Step 4. Restart the old Primary as a Standby and allow it to follow the new timeline by setting "recovery_target_timeline='latest'" in its $PGDATA/recovery.conf file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ cat data/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5433 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_131_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start
server starting
If you go through recovery.conf it is very clear that the old Primary is now trying to connect to port 5433 as the new Standby, pointing to the common WAL archive location, and it has started.
In Logs:
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  database system was shut down at 2014-12-29 00:12:23 PST
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000E.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  entering standby mode
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000D00000002000000A0" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000D.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  consistent recovery state reached at 2/A0000090
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  record with zero length at 2/A0000090
2014-12-29 00:21:17 PST-32310-- [host=] LOG:  database system is ready to accept read only connections
2014-12-29 00:21:17 PST-32325-- [host=] LOG:  started streaming WAL from primary at 2/A0000000 on timeline 14
Step 5. Verify the new Standby status.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 t
(1 row)
Cool, without any rebuild we have brought back the old Primary as the new Standby.
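As a further sanity check (not part of the original steps), the new Primary [5433] should now show the new Standby's walreceiver connection in pg_stat_replication; a single row in the "streaming" state confirms it:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pid, state, sent_location, replay_location, sync_state from pg_stat_replication;"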

Switchback steps:

Step 1. Do clean shutdown of new Primary [5433]:
[postgres@localhost:/opt/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave stop -mf
waiting for server to shut down.... done
server stopped
Step 2. Check for sync status of new Standby [5432] before promoting.
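The same query used in the Switchover steps works here, just against port 5432, for example:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c 'select pg_last_xlog_receive_location() "receive_location",
pg_last_xlog_replay_location() "replay_location",
pg_is_in_recovery() "recovery_status";'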
Step 3. Open the new Standby [5432] as Primary by creating trigger file or pg_ctl promote.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_131_down.txt
Step 4. Restart stopped new Primary [5433] as new Standby.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ more data_slave/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5432 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_down.txt'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave start
server starting
You can verify this in the logs of the new Standby.
In logs:
[postgres@localhost:/opt/PostgreSQL/9.3/data_slave/pg_log~]$ more postgresql-2014-12-29_003655.log
2014-12-29 00:36:55 PST-919-- [host=] LOG:  database system was shut down at 2014-12-29 00:34:01 PST
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  entering standby mode
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000E00000002000000A1" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000E.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  consistent recovery state reached at 2/A1000090
2014-12-29 00:36:55 PST-919-- [host=] LOG:  record with zero length at 2/A1000090
2014-12-29 00:36:55 PST-914-- [host=] LOG:  database system is ready to accept read only connections
2014-12-29 00:36:55 PST-929-- [host=] LOG:  started streaming WAL from primary at 2/A1000000 on timeline 15
2014-12-29 00:36:56 PST-919-- [host=] LOG:  redo starts at 2/A1000090
Very nice, in very little time we have switched the duties of the Primary and Standby servers. You can even notice the increment of the timeline ID in the logs for each promotion.

Like all my other posts, this one is part of knowledge sharing; any comments or corrections are most welcome. :)

--Raghav

Saturday, December 13, 2014

Switchover/Switchback in Slony-I while upgrading PostgreSQL major versions 8.4.x/9.3.x

Every new release of PostgreSQL comes packed with exciting features. To benefit from them, the database server should be upgraded. Traditional upgrade paths like pg_dump/pg_restore or pg_upgrade require significant application downtime. If you are looking for a minimum-downtime upgrade path between major PostgreSQL versions with a solid rollback plan, it can be accomplished with asynchronous Slony-I replication. Since Slony-I (know more about it here) can easily replicate between different PostgreSQL versions, operating systems, and bit architectures, upgrades are doable without substantial downtime. In addition, it has consistent switchover and switchback functionality in its design.

IMO, when doing major version upgrades there should be a proper fallback plan: in case the application turns out to be buggy or fails to perform well on the upgraded version, we should be able to roll back to the older version immediately. Slony-I provides exactly that in the form of switchback. This post demonstrates a minimum-downtime upgrade including switchover/switchback steps.

Before going to the demo, one important point to note: prior to PG 9.0.x, bytea columns stored data in ESCAPE format, while later versions use HEX format by default. When performing a switchback (newer version to older version), this kind of bytea format difference is not supported by Slony-I, hence the ESCAPE format should be maintained throughout the upgrade duration, otherwise you may encounter an error like:
ERROR  remoteWorkerThread_1_1: error at end of COPY IN: ERROR:  invalid input syntax for type bytea
CONTEXT:  COPY sl_log_1, line 1: "1     991380  1       100001  public  foo I       0       {id,500003,name,"A         ",b,"\\\\x41"}"
ERROR  remoteWorkerThread_1: SYNC aborted
To fix this, no changes are required on PG 8.4.x, but on PG 9.3.5 the bytea_output parameter should be changed from HEX to ESCAPE as shown. We can set it at the cluster level ($PGDATA/postgresql.conf) or at the user level (ALTER USER ... SET); I have preferred to go with the user-level change.
slavedb=# alter user postgres set bytea_output to escape;
ALTER ROLE
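Since the user-level setting only takes effect for new sessions, reconnecting and checking it is a quick sanity test (not part of the original steps):
slavedb=# show bytea_output;
 bytea_output
--------------
 escape
(1 row)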
Let's proceed with the upgrade steps. Below are the details of the two servers used in this demo; change them according to your own setup if you are following along:
Origin Node (Master/Primary are called as Origin)                     Subscriber Node (Slave/Secondary are called as Subscriber)
-------------------------------------------------                     ----------------------------------------------------------
Host IP     : 192.168.22.130                                          192.168.22.131
OS Version  : RHEL 6.5 64 bit                                         RHEL 6.5 64 bit 
PG Version  : 8.4.22 (5432 Port)                                      9.3.5 (5432 Port)
Slony Vers. : 2.2.2                                                   2.2.2
PG Binaries : /usr/local/pg84/bin                                     /opt/PostgreSQL/9.3/
Database    : masterdb                                                slavedb 
PK Table    : foo(id int primary key, name char(20), image bytea)     ...restore PK tables structure from Origin... 
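Slony-I replicates data but not the schema, so the foo table structure must already exist on the Subscriber. A minimal sketch of one way to do that (assuming network and pg_hba access between the nodes):
masterdb=# create table foo (id int primary key, name char(20), image bytea);
CREATE TABLE

-bash-4.1$ /usr/local/pg84/bin/pg_dump -s -t foo -p 5432 masterdb | /usr/local/pg84/bin/psql -h 192.168.22.131 -p 5432 slavedb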
For easy understanding and implementation, I have divided the demo into three sections:

1. Compiling Slony-I binaries against the PostgreSQL versions
2. Creating replication scripts and executing them
3. Testing Switchover/Switchback.

1. Compiling Slony-I binaries against the PostgreSQL versions
Download the Slony-I sources from here, and perform a source installation against the PostgreSQL binaries on the Origin and Subscriber nodes.
On Origin Node:
# tar -xvf slony1-2.2.2.tar.bz2
# cd slony1-2.2.2
./configure --with-pgbindir=/usr/local/pg84/bin \
            --with-pglibdir=/usr/local/pg84/lib \
            --with-pgincludedir=/usr/local/pg84/include \
            --with-pgpkglibdir=/usr/local/pg84/lib/postgresql \
            --with-pgincludeserverdir=/usr/local/pg84/include/postgresql/
make 
make install

On Subscriber Node: (assuming PG 9.3.5 installed)
# tar -xvf slony1-2.2.2.tar.bz2
# cd slony1-2.2.2
./configure --with-pgconfigdir=/opt/PostgreSQL/9.3/bin \
            --with-pgbindir=/opt/PostgreSQL/9.3/bin \
            --with-pglibdir=/opt/PostgreSQL/9.3/lib \
            --with-pgincludedir=/opt/PostgreSQL/9.3/include \
            --with-pgpkglibdir=/opt/PostgreSQL/9.3/lib/postgresql \
            --with-pgincludeserverdir=/opt/PostgreSQL/9.3/include/postgresql/server/ \
            --with-pgsharedir=/opt/PostgreSQL/9.3/share
make 
make install
2. Creating replication scripts and executing them
To set up replication, we need to create a few scripts that take care of the replication, including switchover/switchback.

1. initialize.slonik - This script holds the Origin/Subscriber nodes connection information.
2. create_set.slonik - This script holds all the Origin PK Tables that replicate to Subscriber Node.
3. subscribe_set.slonik - This script starts replicating sets data to Subscriber Node.
4. switchover.slonik - This script helps to move control from Origin to Subscriber.
5. switchback.slonik - This script helps to fallback control from Subscriber to Origin.

Finally, two more startup scripts, "start_OriginNode.sh" and "start_SubscriberNode.sh", that start the slon processes using the binaries compiled on the Origin/Subscriber nodes.
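These startup scripts essentially just launch a slon daemon per node. A minimal sketch of what start_OriginNode.sh might contain (the cluster name "slony_cluster" and log location are placeholders; the actual scripts are in the download below):

#!/bin/sh
# start the slon daemon for the Origin node against the 8.4 binaries
/usr/local/pg84/bin/slon -d 2 slony_cluster \
  'dbname=masterdb host=192.168.22.130 port=5432 user=postgres' \
  > /tmp/slon_origin.log 2>&1 &

start_SubscriberNode.sh is the same idea, using /opt/PostgreSQL/9.3/bin/slon and the slavedb connection string.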

Download all scripts from here.
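As an illustration of what the switchover/switchback scripts contain, a switchover.slonik along these lines moves set 1 from node 1 to node 2 (the cluster name and conninfo strings are placeholders and must match the ones used in initialize.slonik):

cluster name = slony_cluster;
node 1 admin conninfo = 'dbname=masterdb host=192.168.22.130 port=5432 user=postgres';
node 2 admin conninfo = 'dbname=slavedb host=192.168.22.131 port=5432 user=postgres';

lock set (id = 1, origin = 1);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2, wait on = 1);

switchback.slonik is simply the mirror image, locking the set on node 2 and moving it back with old origin = 2, new origin = 1.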

Here's the sample data in the foo table on the Origin node (8.4.22), including a bytea column, which we will replicate to the Subscriber node (9.3.5) with the help of the scripts we created.
masterdb=# select * from foo;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
(3 rows)
Let's call the scripts one by one to set up replication. REMEMBER, ALL SLONIK SCRIPTS SHOULD BE EXECUTED ON THE ORIGIN NODE ONLY, EXCEPT "start_OriginNode.sh" AND "start_SubscriberNode.sh", WHICH SHOULD BE EXECUTED ON THEIR RESPECTIVE NODES.
-bash-4.1$ slonik initialize.slonik
-bash-4.1$ slonik create_set.slonik
create_set.slonik:13: Set 1 ...created
create_set.slonik:16: PKey table *** public.foo *** added.
-bash-4.1$ sh start_OriginNode.sh      
-bash-4.1$ sh start_SubscriberNode.sh   # ON SUBSCRIBER NODE
-bash-4.1$ slonik subscribe_set.slonik
After successful execution of the above scripts, you can see that the data on the Origin (masterdb) has been replicated to the Subscriber (slavedb), and that DML operations are no longer allowed on the Subscriber node:
slavedb=# select * from foo;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
(3 rows)

slavedb=# insert into foo values (4,'PG-Experts','Image2');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0
Cool... We have moved the data to the newer PostgreSQL version, 9.3.5. At this stage, if you are satisfied that all data has replicated to the Subscriber node, you can do the switchover.
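One way to be reasonably sure nothing is lagging before the switchover is Slony-I's sl_status view on the Origin; the schema is named after the cluster (here "_slony_cluster" is a placeholder), and st_lag_num_events should be at or near 0:
masterdb=# select st_origin, st_received, st_lag_num_events, st_lag_time from _slony_cluster.sl_status;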

3. Testing Switchover/Switchback.

Let's switch over to the latest version with the script and try inserting data on the Subscriber/Origin nodes.
-bash-4.1$ slonik switchover.slonik
switchover.slonik:8: Set 1 has been moved from Node 1 to Node 2

slavedb=# insert into foo values (4,'PG-Experts','Image2');
INSERT 0 1

masterdb=# select * from foo ;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
  4 | PG-Experts           | Image2
(4 rows)

masterdb=# insert into foo values (5,'PG-Experts','Image3');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0
Perfect... This is what we were looking for: slavedb (Subscriber node), running PG 9.3.5, is now accepting writes and masterdb (Origin node) is receiving slavedb's data, while DMLs executed on masterdb are rejected.

The Slony-I logs show the origin/subscriber node ID movement at the time of switchover:
2014-12-12 04:55:06 PST CONFIG moveSet: set_id=1 old_origin=1 new_origin=2
2014-12-12 04:55:06 PST CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: update provider configuration
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: helper thread for provider 1 terminated
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: disconnecting from data provider 1
...
...
2014-12-12 04:55:11 PST INFO   start processing ACCEPT_SET
2014-12-12 04:55:11 PST INFO   ACCEPT: set=1
2014-12-12 04:55:11 PST INFO   ACCEPT: old origin=1
2014-12-12 04:55:11 PST INFO   ACCEPT: new origin=2
2014-12-12 04:55:11 PST INFO   ACCEPT: move set seq=5000006393
2014-12-12 04:55:11 PST INFO   got parms ACCEPT_SET
If you encounter any issues at this stage, you can switch back to the older version. After the switchback you can continue on the older version until your application or other issues are fixed. This is the perfect rollback plan, without wasting much time, in case of issues after the switchover.
-bash-4.1$ slonik switchback.slonik
switchback.slonik:8: Set 1 has been moved from Node 2 to Node 1

slavedb=# insert into foo values (5,'PG-Experts','Image3');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0

masterdb=# insert into foo values (5,'PG-Experts','Image3');
INSERT 0 1

slavedb=# select * from foo ;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
  4 | PG-Experts           | Image2
  5 | PG-Experts           | Image3
(5 rows)
Very Nice...!!! Isn't this exactly the rollback with minimum downtime we wanted? Yes, it's a clean switch between nodes without missing a transaction.

Logs showing the switchback from Subscriber to Origin Node:
2014-12-12 04:58:45 PST CONFIG moveSet: set_id=1 old_origin=2 new_origin=1
2014-12-12 04:58:45 PST CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: update provider configuration
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: helper thread for provider 2 terminated
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: disconnecting from data provider 2
2014-12-12 04:58:46 PST CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
...
...
2014-12-12 04:58:47 PST INFO   start processing ACCEPT_SET
2014-12-12 04:58:47 PST INFO   ACCEPT: set=1
2014-12-12 04:58:47 PST INFO   ACCEPT: old origin=2
2014-12-12 04:58:47 PST INFO   ACCEPT: new origin=1
2014-12-12 04:58:47 PST INFO   ACCEPT: move set seq=5000006403
2014-12-12 04:58:47 PST INFO   got parms ACCEPT_SET
2014-12-12 04:58:48 PST CONFIG moveSet: set_id=1 old_origin=2 new_origin=1
By this time you might have noticed that none of the transactions are lost during the switching operation between PostgreSQL versions. The only downtime might be restarting your application so it connects to the right Origin or Subscriber node, whereas the Origin/Subscriber nodes themselves are never taken down; they stay up and running.

Remember, the method shown here is not only useful for upgrades; it is the same method used in Slony-I for moving a set between nodes.

Thank you for your patience :). I hope this post helps you upgrade PostgreSQL with minimum downtime using Slony-I, including a proper rollback plan.

--Raghav
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License