Tuesday, January 13, 2015

Configuring Automatic failover using Replication Manager 2.0 on PostgreSQL 9.3.5

In PostgreSQL high availability (Streaming Replication/Hot Standby), one thing still requires human intervention and has no automation: in the event of a master database crash, someone must initiate the failover procedure (trigger-file creation or the pg_ctl promote command) on the standby, because core PostgreSQL has no built-in functionality to detect a master failure and notify the standby. Thus, we need an automated tool to take over the manual failover work, or we have to dive into scripting land and write our own script for it.
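For reference, below is the manual promotion step that such tools automate for us; a minimal sketch, where the trigger-file path is only an example and the data directory matches the standby used later in this post:

touch /tmp/postgresql.trigger     # if recovery.conf defines trigger_file with this path
/opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave promote     # or promote directly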

Today, we have very good external tools to handle automatic failover, such as Replication Manager (repmgr), EDB Failover Manager (EFM), pgHA and HandyRep. Thanks to all of them for filling the automatic-failover gap in PostgreSQL.

In this post, I am demonstrating Replication Manager on a single node (localhost) running RHEL 6.5 and PostgreSQL 9.3.5. For an easy and clear understanding of the concept, I have compiled repmgr against the EnterpriseDB One Click Installer (a pre-built binary package) instead of PostgreSQL source.

To compile repmgr, we need to install a few mandatory dependency packages, namely gcc, postgresql-devel, libxslt-devel, pam-devel, openssl-devel, krb5-devel and libedit-devel, using yum or rpm. After installing the dependencies, download repmgr 2.0 from here, put pg_config in your PATH and start compiling.
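If the dependencies are missing, a one-liner along these lines should pull them in (package names assumed for RHEL 6; adjust to your repositories):

yum install gcc postgresql-devel libxslt-devel pam-devel openssl-devel krb5-devel libedit-devel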
[postgres@localhost:/opt/PostgreSQL/9.3~]$ type pg_config
pg_config is hashed (/opt/PostgreSQL/9.3/bin/pg_config)

export PATH=/opt/PostgreSQL/9.3/bin:$PATH
export LD_LIBRARY_PATH=/opt/PostgreSQL/9.3/lib:$LD_LIBRARY_PATH

tar xvzf repmgr-2.0.tar.gz
cd repmgr
make USE_PGXS=1
make USE_PGXS=1 install
Mostly, repmgr compiles smoothly without any hiccups if all the dependency packages are installed. However, since I am compiling against pre-built PG binaries, the library versions shipped with the installer and those from rpm can diverge, which may throw compilation errors like this one:
/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'
collect2: ld returned 1 exit status
make: *** [repmgrd] Error 1
To fix it, inspect the library the linker is complaining about, /lib64/libldap_r-2.4.so.2.
[root@localhost repmgr-2.0]# cd /lib64/
[root@localhost lib64]# ls -l libldap*
lrwxrwxrwx. 1 root root     20 Dec  8 09:23 libldap-2.4.so.2 -> libldap-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 317200 Apr 29  2013 libldap-2.4.so.2.5.6
lrwxrwxrwx. 1 root root     22 Dec  8 09:23 libldap_r-2.4.so.2 -> libldap_r-2.4.so.2.5.6
-rwxr-xr-x. 1 root root 335264 Apr 29  2013 libldap_r-2.4.so.2.5.6
Ok, there are two copies; retain one and unlink the other.
[root@localhost lib64]# unlink libldap_r-2.4.so.2
Now clean the previous compilation with "make USE_PGXS=1 clean", then repeat the compile steps, and this time it will build without any issues. Fix other library-related errors in a similar fashion. After installation you will find the repmgr binaries, libraries and SQL files in the corresponding PostgreSQL locations.
[postgres@localhost:/opt/PostgreSQL/9.3/bin~]$ ls repmgr*     (Two utility commands)
repmgr  repmgrd

[postgres@localhost:/opt/PostgreSQL/9.3/lib/postgresql~]$ ls rep*  
repmgr_funcs.so

[postgres@localhost:/opt/PostgreSQL/9.3/share/postgresql/contrib~]$ ls
repmgr_funcs.sql  repmgr.sql  uninstall_repmgr_funcs.sql  uninstall_repmgr.sql
We are all set to configure automatic failover with the super-simple Replication Manager. As a first step we need streaming replication configured (refer to the wiki), which I have already done on my localhost between two instances on ports 5432 (master) and 5433 (standby); let's use them. You can also try building the standby using the repmgr STANDBY CLONE command, sketched below; refer to the repmgr documentation for more details.
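A hedged illustration of the clone approach (not the path taken in this post; the connection options are assumptions based on this demo's layout):

repmgr -D /opt/PostgreSQL/9.3/data_slave -d postgres -p 5432 -U postgres --verbose standby clone 127.0.0.1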

Step 1. Enable the repmgr library on both PostgreSQL instances; it is required for repmgr's backend functions.
Master Data Directory : /opt/PostgreSQL/9.3/data
Standby Data Directory: /opt/PostgreSQL/9.3/data_slave

Edit $PGDATA/postgresql.conf on both instances:
shared_preload_libraries = 'repmgr_funcs'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "show shared_preload_libraries;"
 shared_preload_libraries
--------------------------
 repmgr_funcs
(1 row)
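The standby needs the same treatment, since shared_preload_libraries only takes effect at server start; a sketch, assuming the same edit was made in data_slave/postgresql.conf:

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave start
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "show shared_preload_libraries;"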
Step 2. Repmgr needs a repmgr.conf file for each node; since we are working on localhost, we keep each node's repmgr.conf in a separate directory.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ pwd
/opt/PostgreSQL/9.3
[postgres@localhost:/opt/PostgreSQL/9.3~]$ mkdir -p repmgr/master repmgr/slave
Step 3. Create a repmgr.conf file for the master (5432) and the standby (5433) in the directories we created in Step 2.
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ pwd
/opt/PostgreSQL/9.3/repmgr/master
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/master~]$ more repmgr.conf
cluster=test
node=1
node_name=master
conninfo='host=127.0.0.1 port=5432 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60 
reconnect_attempts=6
reconnect_interval=10
failover=automatic
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'


[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ pwd
/opt/PostgreSQL/9.3/repmgr/slave
[postgres@localhost:/opt/PostgreSQL/9.3/repmgr/slave~]$ more repmgr.conf
cluster=test
node=2
node_name=slave
conninfo='host=127.0.0.1 port=5433 dbname=postgres'
pg_bindir=/opt/PostgreSQL/9.3/bin
master_response_timeout=60
reconnect_attempts=6
reconnect_interval=10
failover=automatic
loglevel=DEBUG
promote_command='/opt/PostgreSQL/9.3/repmgr/auto_failover.sh'
The main parameter to observe in repmgr.conf is "master_response_timeout", the total wait duration in seconds before declaring that the master has disappeared. Within that duration, 6 reconnect attempts are made at 10-second intervals (reconnect_attempts and reconnect_interval). If there is no response from the master within "master_response_timeout", automatic failover takes place via the promote_command script. The script consists of the standby promotion steps; the one I created for this setup is shared below.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ more repmgr/auto_failover.sh
#!/bin/bash
echo "Promoting Standby at `date '+%Y-%m-%d %H:%M:%S'`" >>/tmp/repsetup.log
/opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave promote >>/tmp/repsetup.log
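One detail worth double-checking: the promote script must be executable by the postgres user, otherwise repmgrd cannot run it:

[postgres@localhost:/opt/PostgreSQL/9.3~]$ chmod +x repmgr/auto_failover.sh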
Step 4. Register the master and standby nodes with repmgr using the "repmgr" utility.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf --verbose master register >/tmp/repsetup.log 2>&1
Master register logs (/tmp/repsetup.log):
[2015-01-12 01:28:55] [INFO] repmgr connecting to master database
[2015-01-12 01:28:55] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:28:55] [INFO] master register: creating database objects inside the repmgr_test schema
[2015-01-12 01:28:55] [DEBUG] master register: CREATE SCHEMA repmgr_test
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_nodes (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE TABLE repmgr_test.repl_monitor (
[2015-01-12 01:28:55] [DEBUG] master register: CREATE VIEW repmgr_test.repl_status AS
[2015-01-12 01:28:55] [DEBUG] master register: CREATE INDEX idx_repl_status_sort     ON repmgr_test.repl_monitor (last_monitor_time, standby_node)
[2015-01-12 01:28:55] [DEBUG] master register: INSERT INTO repmgr_test.repl_nodes (id, cluster, name, conninfo, priority) VALUES (1, 'test', 'master', 'host=127.0.0.1 port=5432 dbname=postgres', 0)
[2015-01-12 01:28:55] [NOTICE] Master node correctly registered for cluster test with id 1 (conninfo: host=127.0.0.1 port=5432 dbname=postgres)
Opening configuration file: repmgr/master/repmgr.conf
Standby:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/slave/repmgr.conf --verbose standby register >>/tmp/repsetup.log 2>&1
Standby register logs (/tmp/repsetup.log):
[2015-01-12 01:30:37] [INFO] repmgr connecting to standby database
[2015-01-12 01:30:37] [INFO] repmgr connected to standby, checking its state
[2015-01-12 01:30:37] [DEBUG] standby register: SELECT 1 FROM pg_namespace WHERE nspname = 'repmgr_test'
[2015-01-12 01:30:37] [INFO] repmgr connecting to master database
[2015-01-12 01:30:37] [INFO] finding node list for cluster 'test'
[2015-01-12 01:30:37] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:30:37] [INFO] repmgr connected to master, checking its state
[2015-01-12 01:30:37] [INFO] repmgr registering the standby
[2015-01-12 01:30:37] [DEBUG] standby register: INSERT INTO repmgr_test.repl_nodes(id, cluster, name, conninfo, priority) VALUES (2, 'test', 'slave', 'host=127.0.0.1 port=5433 dbname=postgres', 0)
[2015-01-12 01:30:37] [INFO] repmgr registering the standby complete
[2015-01-12 01:30:37] [NOTICE] Standby node correctly registered for cluster test with id 2 (conninfo: host=127.0.0.1 port=5433 dbname=postgres)
Opening configuration file: repmgr/slave/repmgr.conf
Looking at the logs you can easily notice that repmgr creates its own schema in the database, named "repmgr_$CLUSTER", with some tables/views/functions in it, and registers each node as a row in the repl_nodes table; replication-lag samples are later written into the repl_monitor table. Repmgr also ships a utility called "repmgrd", a daemon that monitors master availability from the standby node; it is a management and monitoring daemon that watches the cluster status and can trigger standby promotion. We need to start the "repmgrd" daemon after registering the master/standby nodes with repmgr.

Step 5. Let's start the repmgr daemon process to watch the master. In our case the standby is not running on a separate node, hence we start the daemon with the repmgr/slave/repmgr.conf file.
repmgrd -f repmgr/slave/repmgr.conf --verbose --monitoring-history >>/tmp/repsetup.log 2>&1 &

Logs (/tmp/repsetup.log)

[2015-01-12 01:42:13] [INFO] repmgrd Connecting to database 'host=127.0.0.1 port=5433 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Connected to database, checking its state
[2015-01-12 01:42:13] [INFO] repmgrd Connecting to primary for cluster 'test'
[2015-01-12 01:42:13] [INFO] finding node list for cluster 'test'
[2015-01-12 01:42:13] [INFO] checking role of cluster node 'host=127.0.0.1 port=5432 dbname=postgres'
[2015-01-12 01:42:13] [INFO] repmgrd Checking cluster configuration with schema 'repmgr_test'
[2015-01-12 01:42:13] [INFO] repmgrd Checking node 2 in cluster 'test'
[2015-01-12 01:42:13] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 01:42:13] [INFO] repmgrd Starting continuous standby node monitoring
[2015-01-12 01:42:14] [DEBUG] standby_monitor: INSERT INTO repmgr_test.repl_monitor VALUES(1, 2, '2015-01-12 09:42:14.457287+00'::timestamp with time zone,  '2015-01-12 09:42:13.950172+00'::timestamp with time zone, '2/C84DAB08', '2/C84DAB08',  0, 0)
Step 6. Monitor the nodes registered with repmgr and the running daemon process.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -d postgres -xc "SELECT * FROM repmgr_test.repl_status"
-[ RECORD 1 ]-------------+------------------------------
primary_node              | 1
standby_node              | 2
standby_name              | slave
last_monitor_time         | 2015-01-12 09:42:29.514056+00
last_wal_primary_location | 2/C84DB7A8
last_wal_standby_location | 2/C84DB7A8
replication_lag           | 0 bytes
replication_time_lag      | 00:04:01.960772
apply_lag                 | 0 bytes
communication_time_lag    | 00:03:59.45349

[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -f repmgr/master/repmgr.conf cluster show
[2015-01-12 01:47:00] [INFO] repmgr connecting to database
Role      | Connection String
* master  | host=127.0.0.1 port=5432 dbname=postgres
  standby | host=127.0.0.1 port=5433 dbname=postgres
Step 7. Let's simulate a failover scenario by taking the master down. I am doing it in a rather brutal way, by killing the postmaster process. Please don't do the same in production if you want to stick around in the same company for long :).
[postgres@localhost:/opt/PostgreSQL/9.3~]$ kill `head -n1 data/postmaster.pid`
Or you can take the master down safely:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf
Now check the logs to see how repmgr promoted the standby:
[2015-01-12 02:14:11] [WARNING] Can't stop current query: PQcancel() -- connect() failed: Connection refused
[2015-01-12 02:14:11] [WARNING] repmgrd: Connection to master has been lost, trying to recover... 10 seconds before failover decision
...
[2015-01-12 02:14:21] [ERROR] repmgrd: We couldn't reconnect for long enough, exiting...
[2015-01-12 02:14:21] [DEBUG] repmgrd: there are 2 nodes registered
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=1 conninfo="host=127.0.0.1 port=5432 dbname=postgres" witness=false
[2015-01-12 02:14:21] [ERROR] Connection to database failed: could not connect to server: Connection refused
        Is the server running on host "127.0.0.1" and accepting
        TCP/IP connections on port 5432?
[2015-01-12 02:14:21] [DEBUG] repmgrd: node=2 conninfo="host=127.0.0.1 port=5433 dbname=postgres" witness=false
[2015-01-12 02:14:21] [DEBUG] Total nodes counted: registered=2, visible=1
[2015-01-12 02:14:21] [DEBUG] XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:21] [DEBUG] Last XLog position of node 2: log id=2 (2), offset=3360733048 (C850B778)
[2015-01-12 02:14:26] [INFO] repmgrd: This node is the best candidate to be the new primary, promoting...
[2015-01-12 02:14:26] [DEBUG] promote command is: "/opt/PostgreSQL/9.3/repmgr/auto_failover.sh"
Promoting Standby at 2015-01-12 02:14:26
server promoting
[2015-01-12 02:14:29] [INFO] repmgrd Checking cluster configuration with schema 'repmgr_test'
[2015-01-12 02:14:29] [INFO] repmgrd Checking node 2 in cluster 'test'
[2015-01-12 02:14:29] [INFO] Reloading configuration file and updating repmgr tables
[2015-01-12 02:14:29] [INFO] repmgrd Starting continuous primary connection check
Perfect: the repmgr daemon recognised the master failure, and before promoting the standby with the "auto_failover.sh" script it properly verified the WAL positions as well. Let's verify whether the standby was actually promoted:
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 f
(1 row)
Incredible: without any human intervention, the manual failover procedure was taken care of by repmgr. You can rebuild the dead master as a standby either with repmgr itself (a sketch follows) or by following the switchback procedure shown in my previous post. The same steps work for a two-node setup; just execute the standby steps on the standby node instead of localhost.
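A hedged sketch of the repmgr-based rebuild; --force overwrites the old data directory, and the options are assumptions based on this demo, where the new primary now runs on port 5433:

[postgres@localhost:/opt/PostgreSQL/9.3~]$ repmgr -D /opt/PostgreSQL/9.3/data -d postgres -p 5433 -U postgres --force standby clone 127.0.0.1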

Thanks for reading.
Raghav

Monday, December 29, 2014

Implementing Switchover/Switchback in PostgreSQL 9.3.

This post educates sophisticated DBAs on how to set up a graceful switchover and switchback environment in PostgreSQL high availability. Firstly, thanks to the patch authors Heikki and Fujii for making switchover/switchback easier in PostgreSQL 9.3 (pardon me if I missed other names).

Let me attempt to illustrate, in short, the situation prior to these patches. As you all know, standbys are critical components for achieving fast and safe disaster recovery. In PostgreSQL, the recovery concept majorly deals with timelines, which identify a series of WAL segments before and after PITR or promotion of a standby, to avoid overlapping WAL segments. The timeline ID is embedded in WAL segment file names (e.g., in $PGDATA/pg_xlog/0000000C000000020000009E, "0000000C" is the timeline ID). In streaming replication, both primary and standby follow the same timeline ID; however, when the standby is promoted as the new master by a switchover, it bumps the timeline ID, and the old primary then refuses to restart as a standby due to the timeline ID difference, throwing an error message like:
FATAL:  requested timeline 10 is not a child of this server's history
DETAIL:  Latest checkpoint is at 2/9A000028 on timeline 9, but in the history of the requested timeline, the server forked off from that timeline at 2/99017E68.
Thus, a new standby had to be built from scratch; if the database size is huge, it takes a long time to rebuild, and for that period the newly promoted primary runs without a standby. There was also another issue: when switchover happens, the primary does a clean shutdown and the walsender process sends all outstanding WAL records to the standby, but it does not wait for them to be replicated before it exits; the walreceiver fails to apply those outstanding WAL records because it detects the connection closure and exits.

Today, with two key fixes in PostgreSQL 9.3, both of these issues are addressed very well by the authors, and streaming replication standbys now follow a timeline switch consistently. We can seamlessly and painlessly switch the duties of primary and standby by just restarting them, majorly reducing the rebuild time of the standby.

Note: Switchover/switchback is not possible if the WAL archives are not accessible to both servers, and in the switchover process the primary database must do a clean shutdown (normal or fast mode).

For the demo, let's start with a streaming replication setup (wiki to set up SR), which I have configured on my local VM between two clusters (5432 as Primary and 5433 as Standby) sharing a common WAL archive location, because both clusters need complete access to the sequence of WAL archives. Look at the snapshot shared below with the setup details and current timeline ID for a better understanding of the concept.
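For orientation, these are the settings that matter here; a minimal sketch, where the postgresql.conf values are assumed and the recovery.conf entries are taken from the files shown later in this post:

Primary (data/postgresql.conf):
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /opt/PostgreSQL/9.3/archives93/%f'
max_wal_senders = 5

Standby (data_slave/recovery.conf):
standby_mode = on
primary_conninfo = 'host=localhost port=5432 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
recovery_target_timeline = 'latest'
trigger_file = '/tmp/primary_down.txt'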

At this stage everyone should have a solid understanding that switchover and switchback are planned activities. Now, with the SR setup in place, we can exchange the duties of primary and standby as shown below:

Switchover steps:

Step 1. Do a clean shutdown of the Primary [5432] (-m fast or smart):
[postgres@localhost:/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data stop -mf
waiting for server to shut down.... done
server stopped
Step 2. Check the sync and recovery status of the Standby [5433] before promoting it:
[postgres@localhost:/opt/PostgreSQL/9.3~]$  psql -p 5433 -c 'select pg_last_xlog_receive_location() "receive_location",
pg_last_xlog_replay_location() "replay_location",
pg_is_in_recovery() "recovery_status";'
 receive_location | replay_location | recovery_status
------------------+-----------------+-----------------
 2/9F000A20       | 2/9F000A20      | t
(1 row)
The Standby is in complete sync. At this stage we are safe to promote it as the Primary.
Step 3. Open the Standby as the new Primary, either with pg_ctl promote or by creating a trigger file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ grep trigger_file data_slave/recovery.conf
trigger_file = '/tmp/primary_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_down.txt

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5433 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 f
(1 row)

In Logs:  
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  trigger file found: /tmp/primary_down.txt
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  redo done at 2/A0000028
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  selected new timeline ID: 14
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  restored log file "0000000D.history" from archive
2014-12-29 00:16:04 PST-26344-- [host=] LOG:  archive recovery complete
2014-12-29 00:16:04 PST-26342-- [host=] LOG:  database system is ready to accept connections
2014-12-29 00:16:04 PST-31874-- [host=] LOG:  autovacuum launcher started
The Standby has been promoted as master, and a new timeline was followed, which you can notice in the logs.
Step 4. Restart the old Primary as a standby and allow it to follow the new timeline by setting "recovery_target_timeline='latest'" in its $PGDATA/recovery.conf file.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ cat data/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5433 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_131_down.txt'
[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data start
server starting
If you go through recovery.conf, it is very clear that the old Primary is trying to connect to port 5433 as the new Standby, pointing to the common WAL archive location, and it has started.
In Logs:
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  database system was shut down at 2014-12-29 00:12:23 PST
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000E.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  entering standby mode
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000D00000002000000A0" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  restored log file "0000000D.history" from archive
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  consistent recovery state reached at 2/A0000090
2014-12-29 00:21:17 PST-32315-- [host=] LOG:  record with zero length at 2/A0000090
2014-12-29 00:21:17 PST-32310-- [host=] LOG:  database system is ready to accept read only connections
2014-12-29 00:21:17 PST-32325-- [host=] LOG:  started streaming WAL from primary at 2/A0000000 on timeline 14
Step 5. Verify the new Standby status.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 t
(1 row)
Cool, without any re-setup we have brought back the old Primary as the new Standby.

Switchback steps:

Step 1. Do a clean shutdown of the new Primary [5433]:
[postgres@localhost:/opt/~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave stop -mf
waiting for server to shut down.... done
server stopped
Step 2. Check the sync status of the new Standby [5432] before promoting, just as in the switchover; see the query below.
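The same check as before, now against port 5432:

[postgres@localhost:/opt/PostgreSQL/9.3~]$ psql -p 5432 -c 'select pg_last_xlog_receive_location() "receive_location",
pg_last_xlog_replay_location() "replay_location",
pg_is_in_recovery() "recovery_status";'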
Step 3. Open the new Standby [5432] as Primary by creating the trigger file or with pg_ctl promote.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ touch /tmp/primary_131_down.txt
Step 4. Restart the stopped new Primary [5433] as the new Standby.
[postgres@localhost:/opt/PostgreSQL/9.3~]$ more data_slave/recovery.conf
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=localhost port=5432 user=postgres'
restore_command = 'cp /opt/PostgreSQL/9.3/archives93/%f %p'
trigger_file = '/tmp/primary_down.txt'

[postgres@localhost:/opt/PostgreSQL/9.3~]$ /opt/PostgreSQL/9.3/bin/pg_ctl -D /opt/PostgreSQL/9.3/data_slave start
server starting
You can verify this in the logs of the new Standby:
[postgres@localhost:/opt/PostgreSQL/9.3/data_slave/pg_log~]$ more postgresql-2014-12-29_003655.log
2014-12-29 00:36:55 PST-919-- [host=] LOG:  database system was shut down at 2014-12-29 00:34:01 PST
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  entering standby mode
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000F.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000E00000002000000A1" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  restored log file "0000000E.history" from archive
2014-12-29 00:36:55 PST-919-- [host=] LOG:  consistent recovery state reached at 2/A1000090
2014-12-29 00:36:55 PST-919-- [host=] LOG:  record with zero length at 2/A1000090
2014-12-29 00:36:55 PST-914-- [host=] LOG:  database system is ready to accept read only connections
2014-12-29 00:36:55 PST-929-- [host=] LOG:  started streaming WAL from primary at 2/A1000000 on timeline 15
2014-12-29 00:36:56 PST-919-- [host=] LOG:  redo starts at 2/A1000090
Very nice, in hardly any time we have switched the duties of the Primary and Standby servers. You can even notice the timeline ID increment in the logs for each promotion.

Like all my other posts, this one is part of knowledge sharing; any comments or corrections are most welcome. :)

--Raghav

Saturday, December 13, 2014

Switchover/Switchback in Slony-I while upgrading PostgreSQL major versions 8.4.x/9.3.x

Every new release of PostgreSQL comes packed with exciting features, and to benefit from them the database server should be upgraded. Traditional upgrade paths like pg_dump/pg_restore or pg_upgrade require significant application downtime. Today, if you are looking for a minimum-downtime upgrade path between major PostgreSQL versions with a perfect rollback plan, it can be accomplished with asynchronous Slony-I replication. Since Slony-I (know more about it here) can easily replicate between different PostgreSQL versions, operating systems and bit architectures, upgrades are doable without requiring substantial downtime. In addition, it has consistent switchover and switchback functionality in its design.

IMO, while doing major version upgrades there should be a proper fallback plan, because just in case the application turns out to be buggy or fails to perform well on the upgraded version, we should be able to roll back to the older version immediately. Slony-I provides such functionality by way of switchback. This post demonstrates a minimum-downtime upgrade, including the switchover/switchback steps.

Before going to the demo, one important point to note: prior to PG 9.0.x, bytea columns stored data in ESCAPE format, while later versions use HEX format. When performing a switchback (newer version to older version), this bytea format difference is not supported by Slony-I, hence the ESCAPE format should be maintained throughout the upgrade duration; otherwise you may encounter an error like:
ERROR  remoteWorkerThread_1_1: error at end of COPY IN: ERROR:  invalid input syntax for type bytea
CONTEXT:  COPY sl_log_1, line 1: "1     991380  1       100001  public  foo I       0       {id,500003,name,"A         ",b,"\\\\x41"}"
ERROR  remoteWorkerThread_1: SYNC aborted
To fix this, no changes are required on PG 8.4.x, but on PG 9.3.5 the bytea_output parameter should be set from HEX to ESCAPE as shown. We can set it at cluster level ($PGDATA/postgresql.conf) or at user level (ALTER USER...SET); I preferred to go with the user-level change.
slavedb=# alter user postgres set bytea_output to escape;
ALTER ROLE
Let's proceed with the upgrade steps. Below are the details of the two servers used in this demo; change them according to your setup if you are following along:
Origin Node (Master/Primary are called as Origin)                     Subscriber Node (Slave/Secondary are called as Subscriber)
-------------------------------------------------                     ----------------------------------------------------------
Host IP     : 192.168.22.130                                          192.168.22.131
OS Version  : RHEL 6.5 64 bit                                         RHEL 6.5 64 bit 
PG Version  : 8.4.22 (5432 Port)                                      9.3.5 (5432 Port)
Slony Vers. : 2.2.2                                                   2.2.2
PG Binaries : /usr/local/pg84/bin                                     /opt/PostgreSQL/9.3/
Database    : masterdb                                                slavedb 
PK Table    : foo(id int primary key, name char(20), image bytea)     ...restore PK tables structure from Origin... 
For simple understanding and easy implementation, I have divided the demo into three sections:

1. Compiling Slony-I binaries against the PostgreSQL versions
2. Creating Replication Scripts and executing
3. Testing Switchover/Switchback.

1. Compiling Slony-I binaries against the PostgreSQL versions
Download the Slony-I sources from here, and perform a source installation against the PostgreSQL binaries on the Origin and Subscriber nodes.
On Origin Node:
# tar -xvf slony1-2.2.2.tar.bz2
# cd slony1-2.2.2
./configure --with-pgbindir=/usr/local/pg84/bin 
            --with-pglibdir=/usr/local/pg84/lib 
            --with-pgincludedir=/usr/local/pg84/include 
            --with-pgpkglibdir=/usr/local/pg84/lib/postgresql 
            --with-pgincludeserverdir=/usr/local/pg84/include/postgresql/
make 
make install

On Subscriber Node: (assuming PG 9.3.5 installed)
# tar -xvf slony1-2.2.2.tar.bz2
# cd slony1-2.2.2
./configure --with-pgconfigdir=/opt/PostgreSQL/9.3/bin 
            --with-pgbindir=/opt/PostgreSQL/9.3/bin 
            --with-pglibdir=/opt/PostgreSQL/9.3/lib 
            --with-pgincludedir=/opt/PostgreSQL/9.3/include 
            --with-pgpkglibdir=/opt/PostgreSQL/9.3/lib/postgresql 
            --with-pgincludeserverdir=/opt/PostgreSQL/9.3/include/postgresql/server/ 
            --with-pgsharedir=/opt/PostgreSQL/9.3/share
make 
make install
2. Creating Replication Scripts and executing
To set up replication, we need to create a few scripts that take care of the replication, including switchover/switchback:

1. initialize.slonik - This script holds the Origin/Subscriber nodes' connection information.
2. create_set.slonik - This script holds all the Origin PK tables that replicate to the Subscriber node.
3. subscribe_set.slonik - This script starts replicating the set's data to the Subscriber node.
4. switchover.slonik - This script helps move control from the Origin to the Subscriber.
5. switchback.slonik - This script helps fall back control from the Subscriber to the Origin.

Finally, two more startup scripts, "start_OriginNode.sh" and "start_SubscriberNode.sh", which start the slon processes using the binaries compiled on the Origin/Subscriber nodes.

Download all scripts from here.
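To give an idea of their shape, here is a minimal sketch of what initialize.slonik could contain; the cluster name is a hypothetical choice, while the conninfo values come from the demo details above:

-bash-4.1$ more initialize.slonik
cluster name = pgupgrade;
node 1 admin conninfo = 'dbname=masterdb host=192.168.22.130 port=5432 user=postgres';
node 2 admin conninfo = 'dbname=slavedb host=192.168.22.131 port=5432 user=postgres';
init cluster (id = 1, comment = 'Origin Node - PG 8.4.22');
store node (id = 2, comment = 'Subscriber Node - PG 9.3.5', event node = 1);
store path (server = 1, client = 2, conninfo = 'dbname=masterdb host=192.168.22.130 port=5432 user=postgres');
store path (server = 2, client = 1, conninfo = 'dbname=slavedb host=192.168.22.131 port=5432 user=postgres');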

Here's the sample data in the foo table on the Origin node (8.4.22), including a column of bytea datatype, which we will replicate to the Subscriber node (9.3.5) with the help of the scripts created above.
masterdb=# select * from foo;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
(3 rows)
Let's call the scripts one by one to set up replication. REMEMBER, ALL SLONIK SCRIPTS SHOULD BE EXECUTED ON THE ORIGIN NODE ONLY, EXCEPT "start_OriginNode.sh" AND "start_SubscriberNode.sh", WHICH SHOULD BE EXECUTED ON THEIR RESPECTIVE NODES.
-bash-4.1$ slonik initialize.slonik
-bash-4.1$ slonik create_set.slonik
create_set.slonik:13: Set 1 ...created
create_set.slonik:16: PKey table *** public.foo *** added.
-bash-4.1$ sh start_OriginNode.sh      
-bash-4.1$ sh start_SubscriberNode.sh  //ON SUBSCRIBER NODE   
-bash-4.1$ slonik subscribe_set.slonik
After successful execution of the above scripts, you can see that the data on the Origin (masterdb) has replicated to the Subscriber (slavedb), and that the Subscriber node does not allow any DML operations:
slavedb=# select * from foo;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
(3 rows)

slavedb=# insert into foo values (4,'PG-Experts','Image2');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0
Cool... We have moved the data to the newer PostgreSQL version, 9.3.5. At this stage, if you are satisfied that all the data has replicated to the Subscriber node, you can do the switchover.

3. Testing Switchover/Switchback.

Let's switch over to the latest version with the script and try inserting data on the Subscriber/Origin nodes.
-bash-4.1$ slonik switchover.slonik
switchover.slonik:8: Set 1 has been moved from Node 1 to Node 2

slavedb=# insert into foo values (4,'PG-Experts','Image2');
INSERT 0 1

masterdb=# select * from foo ;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
  4 | PG-Experts           | Image2
(4 rows)

masterdb=# insert into foo values (5,'PG-Experts','Image3');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0
Perfect... This is what we were looking for: now slavedb (Subscriber node), running PG 9.3.5, is accepting data, and masterdb (Origin node) is receiving slavedb's data. DMLs executed on masterdb are also rejected.

The Slony-I logs show the origin/subscriber node ID movement at the time of the switchover:
2014-12-12 04:55:06 PST CONFIG moveSet: set_id=1 old_origin=1 new_origin=2
2014-12-12 04:55:06 PST CONFIG storeListen: li_origin=1 li_receiver=2 li_provider=1
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: update provider configuration
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: helper thread for provider 1 terminated
2014-12-12 04:55:06 PST CONFIG remoteWorkerThread_1: disconnecting from data provider 1
...
...
2014-12-12 04:55:11 PST INFO   start processing ACCEPT_SET
2014-12-12 04:55:11 PST INFO   ACCEPT: set=1
2014-12-12 04:55:11 PST INFO   ACCEPT: old origin=1
2014-12-12 04:55:11 PST INFO   ACCEPT: new origin=2
2014-12-12 04:55:11 PST INFO   ACCEPT: move set seq=5000006393
2014-12-12 04:55:11 PST INFO   got parms ACCEPT_SET
If you encounter any issues at this stage, you can switch back to the older version. After the switchback you can continue with the older version until your application or other issues are fixed. This is the perfect rollback plan, without wasting much time, in case of issues after the switchover.
-bash-4.1$ slonik switchback.slonik
switchback.slonik:8: Set 1 has been moved from Node 2 to Node 1

slavedb=# insert into foo values (5,'PG-Experts','Image3');
ERROR:  Slony-I: Table foo is replicated and cannot be modified on a subscriber node - role=0

masterdb=# insert into foo values (5,'PG-Experts','Image3');
INSERT 0 1

slavedb=# select * from foo ;
 id |         name         | image
----+----------------------+-------
  1 | Raghav               | test1
  2 | Rao                  | test2
  3 | Rags                 | test3
  4 | PG-Experts           | Image2
  5 | PG-Experts           | Image3
(5 rows)
Very nice...!!! Is this not an exact rollback with minimum downtime? Yes, it's a perfect switch between nodes without missing a transaction.

Logs showing the switchback from Subscriber to Origin Node:
2014-12-12 04:58:45 PST CONFIG moveSet: set_id=1 old_origin=2 new_origin=1
2014-12-12 04:58:45 PST CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: update provider configuration
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: helper thread for provider 2 terminated
2014-12-12 04:58:45 PST CONFIG remoteWorkerThread_2: disconnecting from data provider 2
2014-12-12 04:58:46 PST CONFIG storeListen: li_origin=2 li_receiver=1 li_provider=2
...
...
2014-12-12 04:58:47 PST INFO   start processing ACCEPT_SET
2014-12-12 04:58:47 PST INFO   ACCEPT: set=1
2014-12-12 04:58:47 PST INFO   ACCEPT: old origin=2
2014-12-12 04:58:47 PST INFO   ACCEPT: new origin=1
2014-12-12 04:58:47 PST INFO   ACCEPT: move set seq=5000006403
2014-12-12 04:58:47 PST INFO   got parms ACCEPT_SET
2014-12-12 04:58:48 PST CONFIG moveSet: set_id=1 old_origin=2 new_origin=1
By this time you may have noticed that none of the transactions were lost during the switching operations between PostgreSQL versions. The only downtime might be restarting your application to connect to the Origin and Subscriber nodes, whereas the Origin/Subscriber nodes themselves are never taken down; they are just up and running.

Remember, the method shown here is not only useful for upgrades; it is the same method used in Slony-I for moving sets between nodes.

Thank you for your patience :). I hope this post helps you to upgrade PostgreSQL with minimum downtime using Slony-I, including a proper rollback plan.

--Raghav

Thursday, November 20, 2014

How to replicate only INSERTs not DELETEs/UPDATEs on Slony Slave Node ?

In the first place, we need to know why such a requirement is needed. IMO, it is absolutely a business necessity to maintain some kind of historical data on the target database (slave node). Especially, out of multiple slave nodes, one slave node may need to retain the very first form of the data as it was initially written into the database.

To accomplish this requirement, we should come up with some kind of filter, like TRIGGERs/RULEs on the slave node, so that it avoids replaying DELETE and UPDATE statements. Since we are dealing with Slony-I, there is no such built-in mechanism to filter DMLs while replaying them on the slave node, even though it has gathered all the events from the master node. (AFAIK MySQL, Oracle and SQL Server do support filters.)

To put this straight: the traditional Slony-I way maintains uniqueness of rows across all nodes through its core requirement that tables must have primary keys. In such an architecture design it is hard to exclude DELETE/UPDATE statements. Take an example: the primary key column "orderid" of the "orders" table gets its first INSERT with value 100, and that row is replicated in its first form to the filtered slave node. Later, a DELETE statement removes the row with orderid=100 on the master; now, if any INSERT or UPDATE statement attempts to reuse orderid=100, the slave node hits a duplicate key violation and it simply breaks the replication.
ERROR:  duplicate key value violates unique constraint "reptest_pkey"
DETAIL:  Key (id)=(2) already exists.
CONTEXT:  SQL statement "INSERT INTO "public"."reptest" ("id", "name") VALUES ($1, $2);"
.....
or
....
CONTEXT:  SQL statement "UPDATE ONLY "public"."reptest" SET "id" = $1 WHERE "id" = $2;"
2014-11-17 23:18:53 PST ERROR  remoteWorkerThread_1: SYNC aborted
Thus, implementing the rule is not an issue, yet one should be extremely cautious once it is in place. In reality, these filters on a Slony-I slave node are very fragile; the application/developer should always keep in mind that any duplicate row entry by INSERT or UPDATE could break the replication.

As DML filters are not possible with Slony-I alone, we can make use of PostgreSQL's CREATE RULE...ON DELETE/ON UPDATE DO INSTEAD NOTHING and apply that RULE to the table with ALTER TABLE...ENABLE REPLICA RULE to void DELETE/UPDATE statements. Using this option takes a lot of discipline, so you must ensure your application and staff members really follow these rules.

To continue with the steps, you should have a Slony setup; in case you need to set one up, you can refer to my past post here.

Steps on Slave Node (Master DB: postgres, Slave DB: demo, Port: 5432):

1. Stop slon daemons
2. Create ON DELETE and ON UPDATE DO INSTEAD NOTHING rule
demo=# CREATE RULE void_delete AS ON DELETE TO reptest DO INSTEAD NOTHING;
CREATE RULE
demo=# CREATE RULE void_update AS ON UPDATE TO reptest DO INSTEAD NOTHING;
CREATE RULE
3. Apply RULE on table
demo=# ALTER TABLE reptest ENABLE REPLICA RULE void_delete;
ALTER TABLE
demo=# ALTER TABLE reptest ENABLE REPLICA RULE void_update ;
ALTER TABLE
4. Start Slon daemons

Now you can see below that UPDATE/DELETE have no impact on the slave node:
postgres=# delete from reptest where id =2;
DELETE 1
postgres=# update reptest set id=2 where id=1;
UPDATE 1

--On Master
postgres=# select * from reptest ;
 id |    name
----+------------
  2 | A
(1 row)

--On Slave
demo=# select * from reptest ;
 id |    name
----+------------
  1 | A
  2 | C
(2 rows)
If an INSERT statement is executed with value 1, it will break the replication. Be warned...!!

Remember, there are other ways to fulfill this requirement, like dblink, or a BEFORE DELETE trigger whose function returns NULL, but I believe the most efficient way is to use RULE/ENABLE REPLICA RULE when you are working with Slony replication. A hedged sketch of the trigger-based alternative follows.
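The names here are hypothetical; returning NULL from a BEFORE trigger cancels the row operation, and ENABLE REPLICA TRIGGER makes the trigger fire in the replication session:

CREATE FUNCTION void_delete_fn() RETURNS trigger AS $$
BEGIN
  RETURN NULL;   -- cancel the DELETE on this row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER void_delete_trg
  BEFORE DELETE ON reptest
  FOR EACH ROW EXECUTE PROCEDURE void_delete_fn();

ALTER TABLE reptest ENABLE REPLICA TRIGGER void_delete_trg;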

By now you might have read many blogs on logical decoding and replication slots, the new features in PostgreSQL 9.4; I hope that in future they might include the concept of filtering DMLs on a slave.

Thank you for visiting.

--Raghav

Thursday, June 5, 2014

Utilising caching contrib's pg_prewarm and pg_hibernator in PostgreSQL 9.4.

Numerous DBAs (including me) put questions all the time to PostgreSQL hackers/developers/architects on the mailing lists:
  • Q1. Does PG have the ability to cache/warm a relation?
  • Q2. Is it possible to return to the prior state of the cache, where it was left before shutting down the database server for maintenance?

In earlier releases of PostgreSQL there was no way to warm a relation or store the cache state, but from PostgreSQL 9.4 onwards both of the above questions (Q1, Q2) are addressed by two contrib modules, pg_prewarm and pg_hibernator. Despite the fact that they are distinct in functionality, the combination appears to be extremely viable and useful for DBAs. In short:
pg_prewarm (author: Robert Haas) provides the capability to load a relation's data into the OS buffer cache or the PG buffer cache, with the option of giving the first and last block numbers to prewarm. (Note: it has no special protection of pre-warmed data from cache eviction, and if the database instance is restarted, the relations need re-warming.)
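A hedged example of the block-range form; the arguments are relation, mode, fork, first block and last block, and the values here are purely illustrative:

postgres=# select pg_prewarm('foo', 'buffer', 'main', 0, 999);
 pg_prewarm
------------
       1000
(1 row)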
pg_hibernator (author: Gurjeet Singh) provides the capability to automatically save the list of shared buffer contents to disk on database shutdown and to automatically restore the buffers on database startup, much the same as saving/restoring a snapshot of shared_buffers. It uses the PG 9.3 "background worker process" infrastructure and spawns two kinds of processes, "Buffer Saver" and "Block Reader", for the save/restore. Interestingly, with a little hack, pg_hibernator can also allow a standby slave to start serving queries at full speed with the same cache contents as the master; we will see that in a minute :).
Lastly, we need the pg_buffercache module to look inside the current contents of PostgreSQL's shared_buffers. This module helps us understand what share of the buffers is occupied by a relation.

Let's put all these contribs into play and see how they serve the purpose of the two questions (Q1, Q2). I am going to use a table 'foo' of size 885MB on my local VM, along with a standard pg_buffercache query:
SELECT c.relname,
       count(*) AS buffers
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode=c.relfilenode AND c.relname='foo'
INNER JOIN pg_database d ON (b.reldatabase=d.oid AND d.datname=current_database())
GROUP BY c.relname
ORDER BY 2 DESC LIMIT 10;
Usage of pg_prewarm contrib and warming 'foo' table.
postgres=# create extension pg_prewarm;
CREATE EXTENSION 
postgres=# \dt+
                    List of relations
 Schema | Name | Type  |  Owner   |  Size  | Description
--------+------+-------+----------+--------+-------------
 public | foo  | table | postgres | 885 MB |
(1 row)
postgres=# select pg_prewarm('foo');
 pg_prewarm
------------
     113278
(1 row)
--pg_buffercache query output
 relname | buffers
---------+---------
 foo     |  113278
(1 row)
Very simple and straightforward usage of pg_prewarm, with an output of the number of blocks warmed into shared_buffers for relation 'foo'. From the pg_buffercache query output we can verify this: there are 113278 buffers of 8KB block size (113278 * 8 / 1024 ~ 885MB) for relation 'foo', which matches the pg_prewarm output. Now, if the Postgres server restarts for some reason, shared_buffers is empty and the DBA needs to re-warm again to get back to the previous warm state. For a single table, re-warming is always simple; for a group of tables it is agony.

At this point we can make use of the pg_hibernator contrib, because it has the flexibility to save the shared_buffers contents and restore them at startup. Let's enable pg_hibernator and pg_prewarm together and run a similar exercise, simply adding one restart step, and see whether the cache state comes back as-is or not. I am not going to cover the installation of pg_hibernator, because it is very well described on its git page; I will jump directly to the implementation and start the server with pg_hibernator.
postgres 24623     1  0 02:06 pts/4    00:00:00 /usr/local/pgpatch/pg/bin/postgres -D /usr/local/pgpatch/pg/data_10407
postgres 24627 24623  0 02:06 ?        00:00:00 postgres: logger process
postgres 24631 24623  0 02:06 ?        00:00:00 postgres: checkpointer process
postgres 24632 24623  0 02:06 ?        00:00:00 postgres: writer process
postgres 24633 24623  0 02:06 ?        00:00:00 postgres: wal writer process
postgres 24634 24623  0 02:06 ?        00:00:00 postgres: autovacuum launcher process
postgres 24635 24623  0 02:06 ?        00:00:00 postgres: archiver process
postgres 24636 24623  0 02:06 ?        00:00:00 postgres: stats collector process
postgres 24637 24623  0 02:06 ?        00:00:00 postgres: bgworker: Buffer Saver
postgres 24638 24623 11 02:06 ?        00:00:01 postgres: bgworker: Block Reader 2

In database server logs at startup time:

-bash-4.1$ more postgresql-2014-06-02_083033.log
LOG:  database system was shut down at 2014-06-02 08:13:00 PDT
LOG:  starting background worker process "Buffer Saver"
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
Since this is the first time pg_hibernator is in play, you can see its two processes, and the logs contain some information about the start of the "Buffer Saver". Now, let's prewarm relation 'foo', restart the server, and then check the buffer status to see whether pg_hibernator filled the buffers back to where they were left.
-bash-4.1$ psql -p 10407
psql (9.4beta1)
Type "help" for help.

postgres=# select pg_prewarm('foo');
 pg_prewarm
------------
     113278
(1 row)

--pg_buffercache query output
 relname | buffers
---------+---------
 foo     |  113278
(1 row)
postgres=# \q

-bash-4.1$ /usr/local/pgpatch/pg/bin/pg_ctl -D /usr/local/pgpatch/pg/data_10407 stop
waiting for server to shut down.... done
server stopped

-bash-4.1$ ls -l $PGDATA/pg_hibernator/
total 12
-rw------- 1 postgres postgres  160 Jun  3 01:41 1.global.save
-rw------- 1 postgres postgres  915 Jun  3 01:41 2.postgres.save  

-bash-4.1$ /usr/local/pgpatch/pg/bin/pg_ctl -D /usr/local/pgpatch/pg/data_10407 start
server starting
We have restarted the database server; let's examine the logs:
-bash-4.1$ more postgresql-2014-06-03_020601.log
LOG:  database system was shut down at 2014-06-03 02:05:57 PDT
LOG:  starting background worker process "Buffer Saver"
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
LOG:  registering background worker "Block Reader 2"
LOG:  starting background worker process "Block Reader 2"
LOG:  Block Reader 2: restored 113433 blocks
LOG:  Block Reader 2: all blocks read successfully
LOG:  worker process: Block Reader 2 (PID 24638) exited with exit code 1
LOG:  unregistering background worker "Block Reader 2"
LOG:  registering background worker "Block Reader 1"
LOG:  starting background worker process "Block Reader 1"
LOG:  Block Reader 1: restored 20 blocks
LOG:  Block Reader 1: all blocks read successfully
LOG:  worker process: Block Reader 1 (PID 24664) exited with exit code 1
LOG:  unregistering background worker "Block Reader 1"
So, "Buffer Reader" has restored blocks of 113433 + 20, out of which 113278 belongs to relation 'foo'. Great, lets connect and see.
-bash-4.1$ psql -p 10407
psql (9.4beta1)
Type "help" for help.

--pg_buffercache query output
 relname | buffers
---------+---------
 foo     |  113278
(1 row)
Cool... pg_hibernator has brought back the warmed cache state without any DBA interference.

Another good thing about pg_hibernator: a newly created standby can have the same shared buffer contents as the master, so that the standby can start serving queries at full speed. For this exercise, while taking a backup of the $PGDATA directory, I passed SIGTERM to the "Buffer Saver" process so that it writes the current shared_buffers contents to disk (the $PGDATA/pg_hibernator directory), and then followed with the standby setup.
postgres 24637 24623  0 02:06 ?        00:00:00 postgres: bgworker: Buffer Saver
postgres 24653 15179  0 02:06 ?        00:00:01 postgres: wal receiver process   streaming 1/6A000A10
postgres 24654 24623  0 02:06 ?        00:00:00 postgres: wal sender process postgres ::1(65011) streaming 1/6A000A10
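That is, before copying $PGDATA I nudged the Buffer Saver once; a sketch, using the Buffer Saver PID from the process listing above:

-bash-4.1$ kill -TERM 24637    # Buffer Saver writes $PGDATA/pg_hibernator/*.save on termination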
After the setup, my slave started with the same contents as the primary:
-bash-4.1$ psql -p 10477
psql (9.4beta1)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

--pg_buffercache query output
 relname | buffers
---------+---------
 foo     |  113278
(1 row)
Thanks to both authors for these wonderful extensions on caching.

--Raghav