Update 2013-06-05: This post has been translated into Czech by Kate Osipova. Thanks Kate!
While it rarely happens, sometimes your SQL statements for building a database (or database item) stop working due to a version difference on your server(s). In my case, we were working on a development server that is "up to date" with the latest version of PostgreSQL (version 9.1 at the moment). The SQL code to create our database works fine there. However, our staging server runs an older version of Ubuntu, and therefore an older version of PostgreSQL: 9.0, to be exact. Some of our code makes use of regular expressions, and there appears to be a change between those versions in how the regex functions are called. So we needed to upgrade PostgreSQL on the staging server.
The upgrade was pretty straightforward - we just ran "sudo do-release-upgrade" a couple of times to bring the system up to date with the latest Ubuntu. In the process this also installed PostgreSQL 9.1. However, it installs 9.1 alongside version 9.0, and both versions run simultaneously. This makes sense: a 9.0 database may not actually work under 9.1 and could thereby break a mission-critical application, so running both is the "safe" option. In our case, though, we wanted a full upgrade and removal of the older version.
There are two upgrade methods recommended by the official PostgreSQL documentation. The first is to do a pg_dump of your database(s), stop the old database cluster, then restore the dump into the new cluster. Test, and if all goes well, remove the old cluster. Alternatively, you can use the automated tool pg_upgrade.
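The first method can be sketched in a few commands. The port numbers and dump file name here are assumptions (on Debian/Ubuntu a second cluster typically comes up on the next free port, e.g. 5433); check pg_lsclusters for your actual values:

```shell
# Dump every database from the old 9.0 cluster (assumed to be on port 5432).
pg_dumpall --port=5432 > all-databases.sql

# Replay the dump into the new 9.1 cluster (assumed to be on port 5433).
# pg_dumpall emits plain SQL, so it is restored with psql, not pg_restore.
psql --port=5433 --dbname=postgres --file=all-databases.sql

# After testing, stop and remove the old cluster.
sudo pg_dropcluster --stop 9.0 main
```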
This is misleading, though, in the case of an Ubuntu server. There is no pg_upgrade binary; there is, however, a pg_upgradecluster. I believe this is just a naming discrepancy between PostgreSQL versions and Linux distribution packaging. Regardless, the upgrade-cluster method works very nicely. Here's the process:
- First, back up everything. Run pg_dumpall > mybackup.sql to get the databases backed up. Next, back up the database clusters (where the data is actually stored). You can determine where these clusters are with the pg_lsclusters command. For each cluster, do a file copy of the data directory to a backup directory somewhere. (If your database is large, this may require a lot of disk space and time.) These backups are your failsafe in case something bad happens - do NOT skimp on these steps. If something bad does happen, just reverse the process (i.e. copy the clusters back to their original locations, or replay the SQL dump with psql).
- Remove the new cluster. Chances are it is empty, but if you do have data in there, it should have been backed up already (see the step above). Drop the cluster with the command pg_dropcluster 9.1 main. Change the 9.1 part to match the version of your cluster, and the 'main' part to match your cluster name. This step is needed because the upgrade in the next step automatically creates a new cluster, and it will fail if that cluster already exists.
- Do the upgrade. Run pg_upgradecluster 9.0 main. The 9.0 parameter is the version of the CURRENT (old) cluster; the 'main' parameter is the cluster name. We did not specify a version to upgrade to (i.e. "-v 9.1"), so the most current installed version is used by default. If all goes well, you'll see a series of status messages over the next few minutes (depending on how large your databases are). When the process is complete, you'll have a new database cluster for the most recent version AND the original cluster. The original cluster is untouched, except that it is tagged to not start automatically, and when it is started (via pg_ctlcluster) it listens on a different network port to avoid conflicts.
- Test. Everything should just work - but prove it. Once you are satisfied that your applications are running properly against the new cluster, you can drop the old cluster, or keep it around for safekeeping. That is up to you and your comfort level.
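Putting the steps above together, the whole process can be sketched like this (the version numbers and cluster name 'main' match my setup, and the data and backup paths are assumptions - adjust everything to what pg_lsclusters reports on your system):

```shell
# 1. Failsafe backups: a full SQL dump plus a file-level copy of the data directory.
pg_dumpall > mybackup.sql
pg_lsclusters                        # lists version, cluster name, port, and data directory
sudo cp -a /var/lib/postgresql/9.0/main /var/backups/pg-9.0-main

# 2. Drop the (empty) new 9.1 cluster so the upgrade can recreate it.
sudo pg_dropcluster 9.1 main

# 3. Upgrade the 9.0 cluster; with no -v flag, the newest installed version is the target.
sudo pg_upgradecluster 9.0 main

# 4. Only after testing: drop the old 9.0 cluster (or keep it around for a while).
sudo pg_dropcluster 9.0 main
```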
In my case, I had a gotcha crop up. The new cluster would not start. It would show this error message:
```
# /etc/init.d/postgresql restart
 * Restarting PostgreSQL 9.0 database server                   [ OK ]
 * Restarting PostgreSQL 9.1 database server
 * The PostgreSQL server failed to start. Please check the log output:
2012-02-15 11:03:35 MST FATAL:  could not create shared memory segment: Invalid argument
2012-02-15 11:03:35 MST DETAIL:  Failed system call was shmget(key=5432001, size=34922496, 03600).
2012-02-15 11:03:35 MST HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 34922496 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
        If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
        The PostgreSQL documentation contains more information about shared memory configuration.
                                                               [fail]
```
As you can see, it is complaining about the shared memory settings. To fix this I went into the /etc/postgresql/9.1/main/postgresql.conf file and found the shared_buffers line. It was set to 28MB; I changed this to 20MB, saved the file, and restarted the PostgreSQL services. Everything then worked properly. I wouldn't recommend dropping this setting too low, though. If you continue to have problems, you may need to see if you can increase the kernel's SHMMAX setting instead. (The shared buffer settings are normally fine at their default values, but may need to be tweaked when you are optimizing the server for speed or for large databases.)
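The two possible fixes, sketched as commands (the sed edit and the 64 MB SHMMAX value are just illustrations - editing postgresql.conf by hand works fine, and the right SHMMAX depends on your workload):

```shell
# Option 1: shrink PostgreSQL's request by lowering shared_buffers.
sudo sed -i 's/^shared_buffers = .*/shared_buffers = 20MB/' \
    /etc/postgresql/9.1/main/postgresql.conf

# Option 2: raise the kernel limit instead. 64 MB = 64*1024*1024 = 67108864 bytes.
sudo sysctl -w kernel.shmmax=67108864
echo 'kernel.shmmax = 67108864' | sudo tee -a /etc/sysctl.conf   # persist across reboots

# Either way, restart the service afterwards.
sudo /etc/init.d/postgresql restart
```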
So, there you have it. A little wordy perhaps, but when I was searching, the details were glossed over (like which version do I specify - the old one, or the desired one?). That meant digging through man pages, finding examples, and so on, all of which took time. Hopefully this document saves someone that time.