Wednesday, June 15, 2011

PostgreSQL Upgradation

Moving from one version to another on a new server is always a challenging task. The walkthrough below upgrades PostgreSQL 8.3 to PostgreSQL 9.0.4 on a NEW SERVER. The steps themselves are simple, but you need to take some extra care when bouncing the new server before and after restoration. The latest PostgreSQL release carries a lot of fixes, so it is recommended to use the new binaries for the entire upgrade process.

Step 1 (On New Server PG 9.0.x):

The first step is to set SHMMAX/SHMALL at the OS level, because shared_buffers depends entirely on these settings. The script below gives you the figure for SHMMAX/SHMALL based on the server's memory. I have taken the script written by Greg Smith, which is very handy for setting SHMMAX/SHMALL.

For a better understanding of shmmax/shmall, see this link:
http://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSharedMemory
vi shmsetup.sh
#!/bin/bash
# Output lines suitable for sysctl configuration based 
# on total amount of RAM on the system.  The output 
# will allow up to 50% of physical memory to be allocated 
# into shared memory.
# On Linux, you can use it as follows (as root): 
# 
# ./shmsetup >> /etc/sysctl.conf 
# sysctl -p
#
# Early FreeBSD versions do not support the sysconf interface 
# used here.  The exact version where this works hasn't 
# been confirmed yet.

page_size=`getconf PAGE_SIZE` 
phys_pages=`getconf _PHYS_PAGES`

if [ -z "$page_size" ]; then 
  echo Error:  cannot determine page size 
  exit 1 
fi

if [ -z "$phys_pages" ]; then 
  echo Error:  cannot determine number of memory pages 
  exit 2 
fi

shmall=`expr $phys_pages / 2` 
shmmax=`expr $shmall \* $page_size` 

echo \# Maximum shared segment size in bytes 
echo kernel.shmmax = $shmmax 
echo \# Maximum number of shared memory segments in pages 
echo kernel.shmall = $shmall
:wq

Execute the script :
#chmod +x shmsetup.sh
#./shmsetup.sh
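As a quick cross-check, the arithmetic the script performs can be redone by hand; the figures should match what the script appended to /etc/sysctl.conf:

```shell
#!/bin/sh
# Recompute SHMMAX/SHMALL the same way the script does (Linux, getconf).
page_size=$(getconf PAGE_SIZE)       # bytes per memory page
phys_pages=$(getconf _PHYS_PAGES)    # total physical pages
shmall=$((phys_pages / 2))           # half of RAM, counted in pages
shmmax=$((shmall * page_size))       # the same half of RAM, in bytes
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```

After "sysctl -p", running "sysctl kernel.shmmax kernel.shmall" should report the same numbers.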
Step 2 (On New Server PG 9.0.x):

Install the latest version of PostgreSQL 9.0.4 on the new server. Link below for 32bit and 64 bit:
http://get.enterprisedb.com/postgresql/postgresql-9.0.4-1-linux.bin
http://get.enterprisedb.com/postgresql/postgresql-9.0.4-1-linux-x64.bin

In most cases it is recommended to keep "pg_xlog" on a different mount point. You can create the new cluster with the initdb command, pointing "pg_xlog" at that mount point:-

$initdb -D DATA_DIRECTORY -X PG_XLOG_LOCATION
Note: the initdb command will not create the 'pg_log' directory under the new cluster; you need to create it explicitly.

After installation and creation of the cluster, set environment variables such as PGDATA, PATH, PGDATABASE, PGPORT and PGUSER in ".bash_profile" under the postgres user.
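For example, a minimal ".bash_profile" for the postgres user might look like this; the paths and port below are placeholders for wherever you installed 9.0.4 and created the cluster:

```shell
# Hypothetical ~/.bash_profile entries for the postgres user;
# adjust paths and port to your own installation.
export PGDATA=/opt/PostgreSQL/9.0/data
export PGPORT=5432
export PGUSER=postgres
export PGDATABASE=postgres
export PATH=/opt/PostgreSQL/9.0/bin:$PATH    # 9.0.4 binaries first in PATH
```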

Step 3 (On Old Server PG 8.3.x):

As I said, use the new binaries for all the commands you execute on this server. If you don't have the new binaries on this server, install a copy of the new binaries from source into a new location without overriding the existing binaries.

Download:-
http://wwwmaster.postgresql.org/redir/198/h/source/v9.0.4/postgresql-9.0.4.tar.gz
#tar xvf postgresql-9.0.4.tar.gz
#cd postgresql-9.0.4
#./configure --prefix=/usr/pg904
#make
#make install
New binaries location will be "/usr/pg904/"

Step 4 (On Old Server PG 8.3.x):

The initial step is to take a dump of the global objects (users, tablespaces, etc.) using pg_dumpall.
$ /usr/pg904/bin/pg_dumpall -p $PGPORT -g > /pg83_backups/global_dump.sql

Step 5 (On Old Server PG 8.3.x):

Take a dump of every database in the cluster using the command below. Also capture a log for each dump so that any issue in the dumps can be analyzed later.
$ /usr/pg904/bin/pg_dump -Fc -v -U PGUSER -p PGPORT DBNAME -f /pg83_backups/dbname.dmp > /pg83_backups/dbname.log 2>&1

If the database is large, run it under nohup:

$ nohup /usr/pg904/bin/pg_dump -Fc -v -U PGUSER -p PGPORT DBNAME -f /pg83_backups/dbname.dmp > /pg83_backups/dbname.log 2>&1 &
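With many databases, the per-database commands can be generated in a loop. This is only a sketch: in practice DBLIST would come from `psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"`; the names below are made up for illustration.

```shell
#!/bin/sh
# Emit one pg_dump command per database (database names are hypothetical).
DBLIST="appdb reportdb"
for db in $DBLIST; do
  echo "nohup /usr/pg904/bin/pg_dump -Fc -v -U \$PGUSER -p \$PGPORT $db" \
       "-f /pg83_backups/$db.dmp > /pg83_backups/$db.log 2>&1 &"
done
```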

Step 6 (On Old Server PG 8.3.x):

Move all the dumps (/pg83_backups) to the new server.
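A truncated or corrupted copy will only surface as an error deep into the restore, so it is worth checksumming the dumps before and after the transfer. A sketch (run in a temp dir here with a stand-in file; "newserver" is a placeholder host):

```shell
#!/bin/sh
# Demo with a stand-in file; use the real /pg83_backups contents in practice.
cd "$(mktemp -d)"
echo 'pretend dump contents' > dbname.dmp
md5sum *.dmp > dumps.md5                 # record checksums before the copy
# scp *.dmp dumps.md5 postgres@newserver:/pg83_backups/   # placeholder host
md5sum -c dumps.md5                      # re-run this check on the new server
```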

Step 7 (On New Server PG 9.0.x):

As per our Step 2, the new server already has the latest PG 9.0.4 binaries and a fresh cluster. To speed up the restoration process we need to tune some of the settings in the $PGDATA/postgresql.conf file, both before and after.

Settings in the postgresql.conf file before restoration (memory settings may differ as per the available RAM on the box):-
Memory Settings:
---------------
shared_buffers= (as per the shmmax settings, Maximum 8 gigs on 64 bit, 4 gigs on 32 bit)
work_mem= (in between 40MB - 100 MB)
maintenance_work_mem = (in between 1GB - 5 GB)

Checkpoints Settings:
--------------------
checkpoint_segments=(in between 128 - 256)
checkpoint_timeout=(default is 5min; raise to 1h)

Autovacuum settings:
-------------------
autovacuum=off
track_counts=off

Sync to Disk:
------------
fsync=off
full_page_writes=off
synchronous_commit=off

Background Writer settings:
--------------------------
bgwriter_delay=(default 200ms, change to 50ms)
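These edits can be scripted. A hedged sketch using sed, shown here on a throwaway copy so nothing real is touched; point CONF at $PGDATA/postgresql.conf for real use, and keep the original to revert in Step 11:

```shell
#!/bin/sh
# Demo on a temp file; use CONF=$PGDATA/postgresql.conf in practice.
CONF=$(mktemp)
printf '%s\n' 'fsync = on' 'autovacuum = on' 'synchronous_commit = on' > "$CONF"
cp "$CONF" "$CONF.orig"                    # keep a copy to revert later
sed -i -e 's/^fsync = on/fsync = off/' \
       -e 's/^autovacuum = on/autovacuum = off/' \
       -e 's/^synchronous_commit = on/synchronous_commit = off/' "$CONF"
grep ' = off' "$CONF"                      # all three lines flipped
```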

These changes require a restart of the cluster.

$pg_ctl -D $PGDATA restart
or
$pg_ctl -D $PGDATA stop -m f
$pg_ctl -D $PGDATA start

Step 8 (On New Server PG 9.0.x):

First, restore the global objects.
$PGPATH/psql -d DBNAME -p $PGPORT -U $PGUSER -f /pg83_backups/global_dump.sql

Step 9 (On New Server PG 9.0.x):

Databases can be restored in parallel: from PG 8.4 onwards, pg_restore supports a -j option that opens multiple connections to PostgreSQL and speeds up the restoration process.

http://www.postgresql.org/docs/current/static/app-pgrestore.html

The -j value depends on the number of CPUs the NEW server has; for example, with 4 cores you can go with -j 4, since each core can drive one extra pg_restore process. Choose this option as per your CPU cores; you can get the processor count with this command:
$ grep -c ^processor /proc/cpuinfo
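If getconf is available, the online CPU count can also be read directly, which avoids parsing /proc/cpuinfo:

```shell
#!/bin/sh
# Derive a -j value from the number of online CPUs.
jobs=$(getconf _NPROCESSORS_ONLN)
echo "suggested: pg_restore -j $jobs"
```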

Start restoring each database from its dump on the new server; if the database is large, run it under nohup. Also capture a log of each restore for further analysis.
$PGPATH/pg_restore -d DBNAME -Fc -v -p $PGPORT -U PGUSER /pg83_backups/dbname.dmp > /pg83_backups/restore_dbname.log 2>&1
or
nohup $PGPATH/pg_restore -d DBNAME -Fc -v -p $PGPORT -U PGUSER /pg83_backups/dbname.dmp > /pg83_backups/restore_dbname.log 2>&1 &
While restoration is in progress you can monitor it in two ways: at the OS level using "top -cu postgres" or "ps -ef | grep postgres", and at the DB level using "select * from pg_stat_activity".

Step 10 (On New Server PG 9.0.x):

An important step: after a successful restoration, it is recommended to update the planner statistics with the ANALYZE command.
$ $PGPATH/vacuumdb -p $PGPORT -a -Z -v > /pg83_backups/analyze.log 2>&1
or
$ nohup $PGPATH/vacuumdb -p $PGPORT -a -Z -v > /pg83_backups/analyze.log 2>&1 &
Step 11 (On New Server PG 9.0.x):

After ANALYZE, return the settings to normal, or to whatever the application demands, by editing the $PGDATA/postgresql.conf file.
Memory Settings:
---------------
shared_buffers= (as per the shmmax settings, Maximum 8 gigs on 64 bit, 4 gigs on 32 bit)
work_mem= (in between 5MB - 40MB)
maintenance_work_mem = (in between 1GB - 2GB)

Checkpoints Settings:
--------------------
checkpoint_segments=(in between 64 - 128)
checkpoint_timeout=(default)

Autovacuum settings:
-------------------
autovacuum=on
track_counts=on

Sync to Disk:
------------
fsync=on
full_page_writes=on
synchronous_commit=on

Background Writer settings:
--------------------------
bgwriter_delay=(50ms)
Step 12 (On New Server PG 9.0.x):

After the above changes restart the cluster.
$pg_ctl -D $PGDATA restart
or
$pg_ctl -D $PGDATA stop -m f
$pg_ctl -D $PGDATA start
You also need to make some changes in the $PGDATA/pg_hba.conf file to allow application connections. Always keep a copy of the old (PG 8.3.x) $PGDATA/*.conf files on the new server for reference when changing the .conf files.
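For example, entries along these lines allow application hosts to connect; the subnet and auth method are placeholders, so match them to your own environment:

```
# hypothetical pg_hba.conf entries
# TYPE   DATABASE   USER   ADDRESS          METHOD
host     all        all    192.168.1.0/24   md5
local    all        all                     trust
```

A reload ($pg_ctl -D $PGDATA reload) is enough to pick up pg_hba.conf changes; no restart is needed.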

Do post your comments or suggestions, which are greatly appreciated.

Regards
Raghav

8 comments :

Chiru said...

Hi Raghav,

kindly share some details about below mentioned postgres.conf file parameters.

# Sync to Disk:
# ------------
# fsync=on
# full_page_writes=on
# synchronous_commit=on
#max_files_per_process = 1000

Thanks
Chiranjeevi.R
Hyderabad.

Raghavendra said...
This comment has been removed by the author.
Raghavendra said...

Good Question Chiranjeevi

To speed up the upgrade process we adjust these parameters while loading the data; once the upgrade is done and users are allowed to connect and run transactions, we turn them back on.

Fsync (default on)

Fsync mode flushes every completed transaction to disk, guaranteeing that if the operating system crashes or the server suffers a sudden power loss, all your data is safely stored on disk. PostgreSQL is slower than most commercial databases for write operations because some of those databases do not perform this conservative flush to disk in their default modes. In no-fsync mode we are usually faster than commercial databases, though in this mode an OS crash could cause data corruption. The choice is yours.

Full_page_writes (default on)

Much like fsync, turning this parameter off improves performance, but it can lead to data corruption (including the catalogs). You should only consider turning it off if your filesystem guarantees that partial page writes cannot happen.

synchronous_commit (default on)

Transaction commit is normally synchronous when the server waits for the transaction's WAL records to be flushed to disk before returning a success indication to the client. The client is therefore, guaranteed that a transaction reported to be committed will be preserved, even in the event of a server crash immediately after.

The parameter synchronous_commit has some similarities with the parameter fsync. The difference is that with synchronous_commit = off a crash can lose some of the most recent transactions, whereas with fsync = off a crash can destroy the entire database.

Please follow the link below for more information.

http://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html

--Raghav

Anonymous said...

Hi Raghav,
Informative description.......

Thanks,
Ankur

Anonymous said...

hi Raghav, i have a doubt. what is the default checkpoint_timeout ? is it 5mins or 15mins.

Regards
Aarya

Raghavendra said...

5 Mins..
http://www.postgresql.org/docs/9.2/static/runtime-config-wal.html

Anonymous said...

Thanks Raghav. jst i had doubt on this becoz u have mentioned in step 7 as checkpoint_timeout=(default is 15mns make to 1h).

Regards
Aarya

Anonymous said...

Raghav, may i know max tables can be created in a database and how many database can be created in a single cluster ??

thnks and regards,
surendar

Post a Comment

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License