As you all know, after upgrading a database server from one major version to another, the ANALYZE command should be run to repopulate the optimizer statistics in the system catalogs for the newly loaded data. On a large upgraded database, it is a challenge for the application to regain its performance until those statistics are rebuilt. PostgreSQL 9.4 introduces an option, "vacuumdb --analyze-in-stages", which produces usable statistics for the optimizer more quickly. It analyzes the database in three stages, each with different configuration settings (default_statistics_target/vacuum_cost_delay).
If the database was upgraded by any procedure such as pg_dump/pg_restore or pg_upgrade, then it is recommended to use "vacuumdb --analyze-in-stages".
Sample output:
bash-4.1$ /usr/local/pgpatch/pg/bin/vacuumdb -p 9999 --analyze-in-stages -d tester
Generating minimal optimizer statistics (1 target)
Generating medium optimizer statistics (10 targets)
Generating default (full) optimizer statistics
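For reference, the three stages above correspond roughly to running ANALYZE with progressively larger statistics targets while disabling cost-based throttling. A minimal sketch of the equivalent manual SQL (an approximation of what vacuumdb does internally, not its exact implementation):

```sql
-- Stage 1: minimal statistics, fastest pass (1 target)
SET default_statistics_target = 1;
SET vacuum_cost_delay = 0;        -- do not throttle the pass
ANALYZE;

-- Stage 2: medium statistics (10 targets)
SET default_statistics_target = 10;
ANALYZE;

-- Stage 3: full statistics at the configured default target
RESET default_statistics_target;
ANALYZE;
```

The idea is that the first cheap pass gives the planner something workable almost immediately, and each later pass refines the statistics.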
2 comments:
I'm a little surprised this is all that useful. When we needed to get quick statistics for apps after an upgrade, we typically would parallelize the work through separate ANALYZE calls, tuned to the number of CPUs available. Only in rare cases does a single table's ANALYZE take so long that you can't include it as part of your outage window (remember, we are talking about dump/restore types of issues).
"parallelize the work through separate analyze calls, tuned to the level of CPU that are available." ?
Do you mean this at database level ?