Wednesday, December 23, 2015

json functions for dynamic field modification in plpgsql, including in triggers.

One of the occasionally annoying things about PLpgsql is that it's very static, and any sort of dynamic access to records, say by using a trigger argument, is difficult. I just conducted a little experiment that seems to make life a bit easier. The json_populate_record() function returns a record of the same type as its first argument. Mostly we just pass NULL::record_type here, but it turns out that in a trigger function it works perfectly happily with NEW and OLD, and they can be modified using any sort of dynamic values you like, including values from TG_ARGV.

Here's a simple example:


mydb=# create type rainbow as enum('r','o','y','g','b','i','v');
CREATE TYPE
mydb=# create table foo (a text, b int, c boolean, d rainbow);
CREATE TABLE
mydb=# create or replace function foo_tr()
returns trigger
language plpgsql as $$
begin
  if tg_op = 'INSERT' or tg_op = 'UPDATE'
  then
    return json_populate_record(NEW,json_object(TG_ARGV));
  else
    return old;
  end if;
end;
$$;
CREATE FUNCTION
mydb=# create trigger foo_tri before insert on foo for each row execute procedure foo_tr(d,i);
CREATE TRIGGER
mydb=# create trigger foo_tru before update on foo for each row execute procedure foo_tr(c,true);
CREATE TRIGGER
mydb=# insert into foo values ('x',1,null,'r') returning *;
 a | b | c | d 
---+---+---+---
 x | 1 |   | i
(1 row)

INSERT 0 1
mydb=# update foo set c = null, d = 'o' returning *;
 a | b | c | d 
---+---+---+---
 x | 1 | t | o
(1 row)

UPDATE 1
mydb=#

Tuesday, December 22, 2015

Numeric scale and precision

From the quick tip file:

I have lost track of the number of times I have seen people get this wrong. The precision of numeric doesn't specify the number of digits allowed before the decimal point; it specifies the total number of digits before and after the decimal point. So the number of digits allowed before the decimal point is the precision minus the scale. That means that if you want to increase the scale for finer-grained numbers without reducing the range of numbers allowed, you need to increase both the precision and the scale.

Let's say you want to store US dollar amounts up to single trillions of dollars, with no fractional cents. You would use numeric(15,2). That allows thirteen digits before the decimal point and two after. Now if you want to change that to allow fractional cents in hundredths, you will need numeric(17,4), which still allows thirteen digits before the decimal point but now allows four after.
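Here's a quick illustration (the table and values are invented just for the example):

create table prices (amount numeric(15,2));
insert into prices values (9999999999999.99);   -- ok: 13 digits before the point, 2 after
insert into prices values (10000000000000.00);  -- fails with a numeric field overflow: 14 digits before the point
alter table prices alter column amount type numeric(17,4);
insert into prices values (9999999999999.0001); -- ok: still 13 digits before the point, now 4 after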

Saturday, December 12, 2015

psql -c will no longer imply --no-psqlrc

Recently some very good changes were made that allow you to have multiple -c and -f arguments to psql. This is going to be quite useful. One of the things that changed as part of this is that -c no longer implies --no-psqlrc. I found this out the hard way when it proved to be the cause of unpleasant failures by one of my buildfarm animals. This animal runs on my normal Unix workstation, where I have a .psqlrc doing various things, and it also runs a non-standard module that calls "psql -c". Until now there was no issue, because -c meant that my .psqlrc was not processed. The solution was to add "--no-psqlrc" to those psql calls. But I suspect this might catch quite a few people once 9.6 is released.

Saturday, November 21, 2015

Announcing Release 4.16 of the PostgreSQL Buildfarm client


I have just released version 4.16 of the PostgreSQL Buildfarm client.

It can be downloaded from http://www.pgbuildfarm.org/downloads/latest-client.tgz

Several bugs have been fixed, and there are these non-trivial changes:

  • capturing of TAP test logs
  • bin check enabled on Windows
  • rm_worktrees feature
  • an experimental module to run Sepgsql tests
  • try database shutdown following a start failure
  • report all PGBuild module versions to server

rm_worktrees is enabled by putting this in your config file:

rm_worktrees => 1,

The effect is that at the end of a run the checked-out work tree is removed, leaving just the git repository, which in turn might be mostly linked to your HEAD branch if you use the git_use_workdirs option. Before a run, the work tree will be checked out again. The net effect is a significant saving in space used. With these two options, the additional space for each branch, except when it's actually building, is reduced to less than 30MB. On crake, the git mirror, branch trees and cache now come in at about 1.5GB. That's a lot less than it used to be. The additional cost of checking out the work tree each time is very modest.

Shutdown after a start failure tries to remedy a situation where we try to start the server and fail to detect that it has started, even though it has in fact started. So now if we get a failure we try to shut down any server that might have started, just in case. This change is possibly redundant given the recent change where postgres detects that its data directory has disappeared and shuts itself down, but it's probably worth having anyway.



Saturday, November 14, 2015

A record with all defaults

Fresh from yesterday's help file:

Say you want to insert a record into your table with all the default values set. Later on you'll update some of the values. You might want to do this so you can get the row's ID field. So you try this:
insert into mytable values () returning my_id;
but you find it doesn't work. VALUES isn't allowed to have an empty list. Here's what you do:
insert into mytable values (default) returning my_id;
This will insert the default value explicitly into the first column in the table, and implicitly into all the other columns. It doesn't matter if some of the columns don't have an explicit default value set - in that case the default is NULL. Of course, that means this won't work if you have NOT NULL columns without a default set.
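For example, with a hypothetical table like this:

create table mytable (
    my_id      serial primary key,
    note       text,
    created_at timestamptz default now()
);

insert into mytable values (default) returning my_id;

my_id comes back from the sequence, created_at gets now(), and note, having no explicit default, is NULL.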

Sunday, November 8, 2015

Latest adventures with a buildfarm animal on Raspberry Pi 2

My buildfarm member axolotl, that runs (or ran) on a Raspberry Pi 2 running a Fedora 21 remix, died the other day, apparently with a corrupted file system.

While I'm trying to revive it, I thought I would try running the stock Fedora spin built for ARM, instead of a remix. This is pretty easy to do using the script found here. It looks like this is a bad idea, though. It's running so slowly just doing a "dnf update" that I wouldn't think of running a buildfarm member on it.

So, if I can't resuscitate the file system on the other SD card, I'll try a respin with a custom kernel again. There is someone maintaining a set here, which you can read about on the Raspberry Pi forums.

Update


fsck on the root partition managed to resuscitate it without apparent problems, so in the end I haven't had to jump through more hoops.

Tuesday, November 3, 2015

Keeping config files under version control - reprise

I recently blogged about the virtue of putting PostgreSQL config files under version control. My attention was drawn to etckeeper, and a day or so ago I started playing with it, and it seems pretty nice. If you keep your configs in /etc, as Debian/Ubuntu do, then just deploying etckeeper is probably all you need to do. But if you are using Fedora, Red Hat, CentOS, or anything else where the config files aren't in /etc, it's a little bit harder. But not too much. I found a nice little script at serverfault that you can put in etckeeper's commit.d directory that seems to do the trick, happily pulling files outside /etc into the repository. This seems like quite a good solution.

Friday, October 23, 2015

A little permissions problem

So you're on Fedora 22 and you want to run the latest stable PostgreSQL release. You install the PGDG repository package and then install postgresql94-server. You decide to test it out first, so as your normal login user you run initdb and then try to start the server using pg_ctl, something you've done a hundred times before. It should "just work," right? Not this time. You examine the log file and you see it complaining like this:
could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock": Permission denied
Ouch! I've never needed that before. These files have always gone in /tmp. OK, so why is it different now that we are putting them elsewhere? It turns out that the directory has been created by the RPM with permissions different from those of /tmp, and only the postgres user can write there.

My solution (which I hope our RPM maintainers adopt) was to change the permissions on the directory like this, making them the same as those of /tmp:
sudo chmod 1777 /var/run/postgresql
After that, when I try to start the database, everything works as expected.

Friday, October 16, 2015

How to find what's causing that error

Tom Lane just posted a terrific piece of advice about how to find what's causing an error you don't know the source of, which is worth quoting:
What I would recommend is that you get the data onto a non-production
machine where you can play around a bit more. One thing you could do
then is run a build with debug symbols, attach to the backend process
with gdb, and set a breakpoint at "errfinish". Then provoke the error,
and capture a backtrace from the call to errfinish. That would greatly
narrow things down, though it might not be enough to isolate the bug
immediately.
I normally like to have debuginfo packages installed even on production machines so I can do this sort of thing quickly.

Another thing you can try in a non-production environment is to run your test case against a build with assertions enabled and see if it trips an assertion failure. That's often a quick way of finding where problems are occurring.

Thursday, October 8, 2015

Testing on Fedora

One of the things that is nice about Fedora is how easy it is to test something very quickly. They provide a considerable number of cloud images suitable for spinning up instances fast. In my case I have a machine with Vagrant and VirtualBox installed, and when we got a complaint recently about PostgreSQL misbehaving with some glibc locales in Fedora 23, I wanted to test it. After downloading the image I wanted, I was able to get running with this:

vagrant box add --name=f23 Fedora-Cloud-Base-Vagrant-23_Beta-20150915.x86_64.vagrant-virtualbox.box
mkdir f23
cd f23
vagrant init f23
vagrant up
vagrant ssh

It took me about 5 minutes from the time I discovered the image I wanted to test to starting to set up a PostgreSQL build on it, and most of that was download time.

Redis FDW doesn't support Redis Cluster

Redis Cluster is a way of sharding data across a group of federated Redis nodes. I was asked recently if this is supported by the Redis Foreign Data Wrapper. Unfortunately, the answer is "No." The Redis site coyly states that "At this stage one of the problems with Redis Cluster is the lack of client libraries implementations." What they don't mention is that their own C client library, hiredis, has no support for it at all, happily reporting back redirects to the client rather than following them. It's rather as if a web client library refused to follow an HTTP redirect. The Redis FDW is based on hiredis, so as the library doesn't support Redis Cluster, neither do we. Of course, at the expense of very large amounts of code we could make the FDW handle it, but this seems quite wrong architecturally. The FDW should not have to know or care that the Redis server it's connected to is sharded.

Friday, October 2, 2015

Keep your configs under version control

A conversation yesterday with a client: "What changes were made in the PostgreSQL configs?" "I don't recall exactly." This is why you should keep all the configs under version control, and have a regular monitoring check that the controlled version is the same as the operating version, so that if someone makes a manual change without checking it in, the monitoring alarm will go off.

If you keep your config in the data directory, in the default PostgreSQL style, probably the simplest way to do this is to create a git repository with a symbolic link to the data directory and a .gitignore that ignores everything in the data directory except the config files. Then add the config files and you're done. If you keep the config files separately, as Debian does, then you can probably just create a git repository right in the config directory, if you don't already have one at a higher level.

I have been guilty of not doing this in the past, but I have come to the strong conclusion that this should be a part of standard best practice.

Update

My attention has been drawn to etckeeper, which makes handling this super simple when you're using Debian-style config setups. It's available both for Debian and for RedHat-ish systems, as well as possibly others.

Tuesday, September 15, 2015

When to use json, and when to use jsonb

As well as being able to store JSON in the database, either as plain text (json type) or as a decomposed binary format (jsonb type), PostgreSQL has quite a number of useful utility functions for creating JSON from non-JSON data. In many cases, clients don't want to store the JSON or process it in any way; they just want to get hold of it outside the database. There are subtle differences between the two types - jsonb eliminates whitespace and duplicate object keys, and mangles the order of object keys.
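For example, the same input survives intact in json but not in jsonb (result spacing shown approximately):

select '{"b" : 1, "a" : 2, "a" : 3}'::json;
-- {"b" : 1, "a" : 2, "a" : 3}     stored exactly as given

select '{"b" : 1, "a" : 2, "a" : 3}'::jsonb;
-- {"a": 3, "b": 1}                whitespace normalized, duplicate key dropped (last value wins), keys reordered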

PostgreSQL 9.5 will introduce jsonb equivalents of pretty much all the json building functions, namely json_agg, json_object_agg, json_build_array, json_build_object, json_object and to_json. The exceptions are the legacy functions array_to_json and row_to_json, for which we provided no jsonb equivalents. These are covered by to_jsonb.

If object key order preservation is important to you, or you want to be able to have duplicate keys, then you definitely don't want to be using the jsonb functions. But beyond that, if you don't want to store the json or process it in any way, the jsonb functions are generally overkill, and they are definitely slower. A recent benchmark I did for a client showed a query using the jsonb functions to be slower by a factor of about 4. When you think about it, this isn't entirely surprising - not only is it more work to turn data into jsonb, but when we're done we have to turn it back into a piece of text to deliver to the client. With json, once you've constructed it, it's already text, so there's no extra work.

So don't assume that the jsonb functions are always going to be faster - in many cases they are not. Other things being equal, you will often want to be using the json generating functions.

On the other hand, if you want to store the data for later processing, e.g. by examining the contents of a field inside the json, jsonb is probably the way to go.

And of course, if speed is an issue you need to benchmark your application.

Thursday, August 27, 2015

Quick buildfarm recipe for CentOS and friends

I've been writing a little provisioning script for a Vagrant CentOS 7 machine that will be a buildfarm client.

Stripped of the vagrant stuff and some stuff that's special for the machine, here is what I did to get it running:

sudo yum install -y wget gcc make flex bison ccache git \
  perl-libwww-perl perl-Digest-SHA perl-devel perl-ExtUtils-Embed \
  zlib-devel openssl-devel readline-devel python-devel tcl-devel \
  libxml2-devel libxslt-devel openldap-devel

mkdir bf
cd bf
HERE=`pwd`
wget http://www.pgbuildfarm.org/downloads/latest-client.tgz
mv latest-client.tgz buildfarm-latest-client.tgz
tar -z --strip-components=1 -xf buildfarm-latest-client.tgz 
mkdir root
sed -i "s!build_root =>.*!build_root => '$HERE/root',!" build-farm.conf
At this stage the buildfarm client will run quite successfully, as can be tested using:
perl run_build.pl --test

Monday, August 17, 2015

Quick tips

Found in the last couple of days.

1. If you have more than one active database, and you're using text format logs, make sure you have %d (the database name) in your log line prefix. Not knowing which database a log line comes from can be confusing and makes life more difficult (see the example after these tips).

2. If you report on a possible database error, report from the Postgres logs, not from your client's logs. We need to see things from the database's point of view. In the case in point, the Postgres logs contained no relevant errors at all, i.e. this wasn't a Postgres error.

3. Found when trying to resize a data volume on Amazon: certain Linux disk partitioning utilities don't optimally align partitions by default, notably sfdisk and cfdisk. Stay away from those, or make sure you explicitly choose proper alignment for partitions. Programs that do the right thing include fdisk and parted. In general you are probably best to stick to using parted.
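For tip 1, something like this in postgresql.conf does the job (one possible prefix - adjust the rest to taste):

log_line_prefix = '%m [%p] %d '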

Wednesday, July 15, 2015

Dropping columns on partitioned tables.

Say you have a partitioned table and you want to add a column. There is no problem - you just add the column to the parent table, and it is added to all the children. But what if you want to drop a column? Then things are not so straightforward. If the child's column was created before it was inherited then it won't be dropped just by dropping it on the parent. So it very much depends on how the child is set up. If you do:
create table child() inherits (parent);
then dropping a column in the parent drops it in the child too. But if you do:
create table child (like parent);
alter table child inherit parent;
then dropping a column in the parent won't drop it in the child. The pg_partman package follows this second pattern when setting up child partitions, as I discovered yesterday when a client ran into this problem. In this case you have to drop the column from the children yourself. I devised the following snippet of code to accomplish this, to be run after you have dropped the column from the parent:
do $$
declare
    child oid;
begin
    -- loop over all the direct children of the parent table
    for child in
        select inhrelid
        from pg_inherits
        where inhparent = 'parent'::regclass
    loop
        -- and drop the column from each one
        execute 'alter table ' || child::regclass ||
                ' drop column if exists some_column';
    end loop;
end;
$$;

Friday, June 19, 2015

Rotfang, the random data generator

The other day I gave a lightning talk at pgCon about a Foreign Data Wrapper called Rotfang, which is for generating arbitrary amounts of random data. This is intended for generating test cases. The software supports a small number of data types natively, but also allows you to use user-supplied functions to generate data. It's available on bitbucket. A short example:
CREATE EXTENSION rotfang_fdw;

CREATE FUNCTION random_tstz (typmod int)
RETURNS timestamptz
LANGUAGE SQL as
$$
    SELECT now() - (random() * 20.0) * interval '1 year'
$$;

CREATE FOREIGN TABLE rand2 (b boolean, ts timestamptz)
SERVER rotfang
OPTIONS (maxrows '10', func_ts 'random_tstz');

SELECT * FROM rand2;

Sunday, June 14, 2015

What is a Set type?

Yesterday I proposed an Unconference talk about Set types, and I've had a couple of people ask me about what Set types are and what they would be for, so here is a brief rundown.

Say you have a table of things that you want to be able to apply some set of tags to. It might be blog posts, for example. The tags might be subject classifications, or reader ratings. Let's say for the sake of argument that it's going to be subject classifications, and that we actually have a (large) enum type to standardize that.

The classic way to do this is to have a table of {blog_post_id, tag} which will be unique on the combination. Up to now, adding a tag has been a nuisance - you need to make sure the tag doesn't already exist or you'll get a uniqueness violation. In 9.5 that's improved with INSERT ... ON CONFLICT DO NOTHING. Even then it's a bit more work than I really want to do. What I really want is something like this:
UPDATE blog_posts
SET subject_tags = subject_tags + 'my_new_subject'
WHERE post_id = 12345
and then I'll be able to do
SELECT * FROM blog_posts
WHERE subject_tags ? 'some subject'
It's also possible to do this, somewhat inefficiently, by using hstore or jsonb fields (in jsonb you'd just have a top level object). In both cases you would use dummy values - say make everything have a value of 1. But that's ugly, and error prone, and rather hackish.

Now I don't know if there is enough extra value here to justify the work and code maintenance involved. I've managed to live without it all these years without attacking what little hair I have left. But it's something where I have occasionally thought "I could do this so much more simply and elegantly if I had a set type." So I'd like to see if there is enough interest to make it into a real project.

Wednesday, June 3, 2015

Blackhole FDW updated

There have been some new things added to the FDW API in 9.5, so I have just updated the blackhole Foreign Data Wrapper to reflect the changes, so it now has documented skeleton functions for every FDW callback.

Sunday, May 31, 2015

Goodbye jsonb_replace, hello jsonb_set

After a little controversy on the PostgreSQL Hackers mailing list, I got the message loud and clear that some users weren't happy with the new jsonb features, and in particular the fact that they weren't getting a way to add values to structures nested well inside the jsonb object. Eventually I came up with the idea that we should do this via an enhanced jsonb_replace function. This function gained an additional boolean parameter, defaulting to true; when it is true, the function will not only replace existing items, but if the last element of the path designates an object key or array element that is not present, it will be created with the new value. Since this is no longer just replacement of existing values, the function has been renamed jsonb_set. These changes are based on some original code from Dmitry Dolgov, heavily modified by me, and they have just been committed for inclusion in release 9.5. Here are a few examples:
andrew=# select jsonb_set('[]','{0}','"a value"');
  jsonb_set  
-------------
 ["a value"]

andrew=# select jsonb_set('{}','{x}','"a value"',false);
 jsonb_set 
-----------
 {}

andrew=# select jsonb_set('{"f1":{"f2":{"y":1}}}','{f1,f2,x}','"a value"');
                jsonb_set                 
------------------------------------------
 {"f1": {"f2": {"x": "a value", "y": 1}}}


Although this is actually a fairly small change, in terms of code, its impact could be quite big. I believe it will add significantly to our compelling JSON story.

Running pgindent on non-core code, or development code

Running pgindent is not nearly as hard as some people seem to think it is. The hardest part is getting a workable set of typedefs to use. That's why the buildfarm now constructs these lists automatically for each live branch.

But that doesn't help if you're working on non-core code. Here's what I did to get a working typedefs list for the Redis FDW code:
objdump -W redis_fdw.so |\
 egrep -A3 DW_TAG_typedef |\
 perl -e ' while (<>) { chomp; @flds = split;next unless (1 < @flds);\
     next if $flds[0]  ne "DW_AT_name" && $flds[1] ne "DW_AT_name";\
     next if $flds[-1] =~ /^DW_FORM_str/;\
     print $flds[-1],"\n"; }'  |\
 sort | uniq > redis_fdw.typedefs

This is a slight adaptation of what the buildfarm code does on Linux to get a typedefs list.

After that, indenting the code was a matter of just doing this:
pgindent --typedefs=redis_fdw.typedefs redis_fdw.c

What if you're developing a piece of core code and you'd like to run pgindent on it, but you've introduced some new typedefs, so pgindent mucks up the indentation by adding extraneous spaces? You have a couple of options. Let's assume that what you're working on is backend code. Then you could run the above extraction on the built backend - it doesn't have to be installed, just run it against src/backend/postgres. Then use that to run pgindent against each of the files you're working on. You don't have to run it separately for each file - you can name as many files to indent as you like on the command line.

If you do that, look at the results carefully. It's possible that the absence of some platform-dependent typedef has mucked up your file. So a safer procedure is to grab the latest typedefs list from the buildfarm server and combine it with the typedefs list you just constructed, like this:
wget -q -O - "http://www.pgbuildfarm.org/cgi-bin/typedefs.pl?branch=HEAD" |\
 cat - mytypedefs | sort | uniq > mynewtypedefs
and then use that file to pgindent your code.

None of this is as easy as it might be. But none of it is very hard either.

Addendum

If you only have a handful of new typedefs, you can pass them on the command line to pgindent, like this:

pgindent --typedefs=mytypedefs --list-of-typedefs="typedef1 typedef2" myfile1.c myfile2.c


Wednesday, May 20, 2015

Placeholder blues

There has been some debate going on on the hackers list about problems certain drivers have with operators that contain the ? character. Since drivers such as the JDBC driver consider any such character outside of quote marks to be a parameter placeholder, its use as part of an operator makes life difficult for them. Part of the problem is that we allow almost every available non-alphanumeric ASCII character to be part of an operator. Why, for example, do we allow `? I've never seen it used in an operator that I can recall, but the fact that we allow it means we can't use it for something else (such as marking an operator). That makes doing anything about the problem hard. Some people want to deprecate the use of ? in operators. I think it's far too late for that - its use is far too widespread.

Sunday, May 17, 2015

Yet another data recovery job

One of the things I like about data recovery jobs is that each one is different. Right now I'm wrestling with this problem: how can I recover whatever is possible from a backup that's missing a handful of xlog files? We know this will result in a probably corrupt database, but we still want to see what data is available.

My first two approaches didn't work too well. I tried mangling the control file, and creating dummy xlog files to replace the handful that are missing.

Right now I'm trying to follow a suggestion from Andres Freund to drive the process via the debugger, moving it on to the next available log file when it gets to where it would want the first missing log file.  The trouble is it's hard to find what to set exactly and where to set it (this is Postgres 9.3). Every time I try something I get errors like "unexpected page_addr". If I were more familiar with this section of the code it might be easier, but it's something I've pretty much never dabbled in.

It all makes for an exciting life.

Friday, May 15, 2015

Set types

Every so often I've wanted a SET metatype in Postgres.

We do have bit fields, which can be used as sets with a little work, and this is how MySQL does its SET types, as I understand it. But what if you want a set of enums (our enums can have very large lists of values), or a set of text values, or of integers? Bit fields won't work for these - we would need some sort of sparse representation.

One obvious way would be to store the set as a one-dimensional array, kept in sorted order for efficient searching. For text values, we could probably use some smart form of sorting such as is used by jsonb and hstore to store object keys, to reduce the need for string comparisons.

Obviously, I have done no work on this, but it seems like it would be a worthwhile project.

In the meantime, you can use hstore more or less in this way. Just treat it as a mapping from whatever values you have to 1.  It's a bit of a hack, but it should work. You'll just need to cast your values to text.
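To make that concrete, here's a rough sketch of the hstore workaround (the table and column names are invented for the example):

create extension if not exists hstore;

create table blog_posts (
    post_id      int primary key,
    subject_tags hstore not null default ''::hstore
);

-- "add to the set": map the tag to a dummy value of 1
update blog_posts
   set subject_tags = subject_tags || hstore('my_new_subject', '1')
 where post_id = 12345;

-- membership test
select post_id from blog_posts where subject_tags ? 'my_new_subject';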

Wednesday, May 13, 2015

New jsonb features for 9.5.

Yesterday I committed some new features for jsonb. These are:
  • jsonb_pretty(jsonb) produces nicely indented json output.
  • jsonb || jsonb concatenates two jsonb values.
  • jsonb - text removes a key and its associated value, or a matching array element, from the json
  • jsonb - int removes the designated array element
  • jsonb - text[] removes a key and associated value or array element at the designated path
  • jsonb_replace(jsonb,text[],jsonb) replaces the array element designated by the path or the value associated with the key designated by the path with the given value.
This was based on Dmitry Dolgov's jsonbx extension for 9.4, which in turn was based on earlier work by Oleg Bartunov, Teodor Sigaev and Alexander Korotkov on nested hstore. Petr Jelinek also did some useful review and tidy-up work on the core features. So, a good team effort.
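A few quick examples, using the names and operators as committed at the time (jsonb_replace was later renamed jsonb_set, as described in the post above); the expected results are shown as comments, modulo whitespace:

select '{"a": 1}'::jsonb || '{"b": 2}'::jsonb;              -- {"a": 1, "b": 2}
select '{"a": 1, "b": 2}'::jsonb - 'b'::text;               -- {"a": 1}
select '["x", "y", "z"]'::jsonb - 1;                        -- ["x", "z"]
select '{"a": {"b": 1, "c": 2}}'::jsonb - '{a,b}'::text[];  -- {"a": {"c": 2}}
select jsonb_replace('{"a": {"b": 1}}', '{a,b}', '9');      -- {"a": {"b": 9}}
select jsonb_pretty('{"a": {"b": 1}}');                     -- the same value, nicely indented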

Tuesday, May 5, 2015

Reminder: checking Windows builds is incredibly easy.

Most PostgreSQL developers don't use Windows, and many fairly openly regard the Windows port as a nuisance. But we've had it for more than a decade, now, and it's long since time we got over that. And checking that at least your changes compile is easy. It doesn't need a Windows box at all - Postgres builds for Windows quite happily via a cross-compiler. Here's exactly what I did on a Fedora 20 machine this morning:
sudo yum install mingw64-gcc mingw64-zlib
git clone git://git.postgresql.org/git/postgresql.git
cd postgresql
./configure --host=x86_64-w64-mingw32
make world
And it just worked. I haven't tested the result on Windows, and it might even be possible to test it using WINE, but even without testing, this procedure is useful to see if there are compilation errors.

I assume all this could be (and possibly has been) built for other platforms such as OSX and FreeBSD. If not, it might be worth putting together some packages for those platforms - it shouldn't be too hard.

Saturday, April 18, 2015

Buildfarm bug fix for MSVC builds

Unfortunately there was a small bug in yesterday's buildfarm client release. The bug only affects MSVC builds, which would fail silently on the HEAD (master) branch.

There is a bug fix release available at http://www.pgbuildfarm.org/downloads/releases/build-farm-4_15_1.tgz or you can just pick up the fixed version of run_build.pl (the only thing changed) at https://raw.githubusercontent.com/PGBuildFarm/client-code/b80efc68c35ef8a1ced37b57b3d19a98b8ae5dd2/run_build.pl

Sorry for the inconvenience.

Friday, April 17, 2015

New PostgreSQL Buildfarm Client Release

I have just released version 4.15 of the PostgreSQL Buildfarm Client. Here's what's changed:
  • support the new location for pg_upgrade
  • support running tests of client programs
  • support building, installing and running test modules
  • use a default ccache directory
  • improve logging when running pg_upgrade tests
  • handle odd location of Python3 regression tests
  • add timestamp to default log_line_prefix
  • make qw() errors in the config file fatal (helps detect errors)
  • minor bug fixes for web script settings.
  • allow for using linked git directories in non-master branches
The last item might need a little explanation. Essentially this can reduce quite dramatically the amount of space required if you are building on more than one branch. Instead of keeping, say, six checked-out repos for the current six tested branches, we keep one and link all the others to it. This works almost exactly the way git-new-workdir does (I stole the logic from there). This doesn't work in a couple of situations: if you are using Windows, or if you are using git-reference. In these cases the new setting is simply ignored.

To enable this new setting in an existing installation, do the following after installing the new release:
  • in your config file, add this setting:
    git_use_workdirs => 1,
  • remove the pgsql directory in each branch directory other than HEAD
Another good thing to do in existing installations would be to add "%m" to the beginning of the log_line_prefix setting in the extra_config stanza.

Enjoy!

Wednesday, April 15, 2015

Hot Fix for buildfarm client, currently broken by pg_upgrade move

Yesterday the pg_upgrade program was moved from contrib to bin in the source tree. Unfortunately this broke most of those buildfarm members which check pg_upgrade. There is a hot fix for the TestUpgrade buildfarm module that can be downloaded from github. I will work on cutting a new buildfarm release in the next few days, but this file can just be dropped in place on existing installations.

Tuesday, April 7, 2015

Dynamically disabling triggers without locks

Recently Simon Riggs committed a patch by himself and Andreas Karlsson to reduce the lock strength required by certain ALTER TABLE commands, including those to enable or disable triggers. Now the lock level required is SHARE ROW EXCLUSIVE instead of ACCESS EXCLUSIVE. That means it doesn't block SELECT commands any more, and isn't blocked by them, although it will still block and be blocked by INSERT, UPDATE and DELETE operations. Very nice.

However, without formally disabling a trigger you can tell it dynamically not to do anything in the current session without taking any locks at all. Here's a little bit of PLpgsql code I wrote recently for this sort of operation in an INSERT trigger:
    begin
        disabled := current_setting('mypackage.foo_trigger_disabled');
    exception
        when others then disabled := 'false';
    end;
    if disabled = 'true' then
       return NEW;
    end if;
Note that this will only block the trigger from doing anything in sessions where this variable is set. But that's often exactly what you want. In the case this was written for, the trigger is redundant (and expensive) for certain bulk operations, but required for normal operations.  So in a session where we are performing the bulk operation, we can simply set this and avoid taking out a heavy lock on the table, and do this instead, before running our bulk operation:
    set mypackage.foo_trigger_disabled = 'true';
The code above is a bit ugly because of the requirement for the exception handler. There's a cure for that coming, too. David Christensen has submitted a patch to provide a form of current_setting() which will return NULL for unset variables instead of raising an exception.

Note, too, that you could use a value in a one-row one-column table if you wanted something that could apply in all sessions, not just the current session. It would be a bit less efficient, though. This mechanism is pretty light-weight.
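For completeness, here's a minimal sketch of the whole arrangement. The table, function, trigger and setting names are all invented; the real trigger would do its normal work where indicated:

create or replace function foo_maintenance_tr() returns trigger
language plpgsql as $$
declare
    disabled text;
begin
    begin
        disabled := current_setting('mypackage.foo_trigger_disabled');
    exception
        when others then disabled := 'false';
    end;
    if disabled = 'true' then
        return NEW;   -- skip the expensive work in this session
    end if;
    -- ... the normal (expensive) trigger work goes here ...
    return NEW;
end;
$$;

create trigger foo_maintenance_tri before insert on foo
    for each row execute procedure foo_maintenance_tr();

-- then, in the bulk-loading session only:
set mypackage.foo_trigger_disabled = 'true';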

Monday, April 6, 2015

Fun with Raspberry Pi 2 and the buildfarm

Here's a picture of my two Raspberry Pi 2 boxes, both running headless and wireless.


One is running Raspbian, installed via NOOBS, and the other Fidora, a remix of Fedora 21 for Raspberry Pi 2. It turned out that Pidora doesn't work on the Raspberry Pi 2, a fact that is extremely well hidden on the Raspberry Pi web site.

I have set up test buildfarm animals on both of these. But something odd is happening. They are both getting intermittent failures of the stats regression test. Sometimes it happens during "make check", sometimes during "make installcheck" and sometimes during testing of pg_upgrade (which in turn runs "make installcheck").

These machines are not anything like speed demons. Far from it. But we also run other slow machines without getting this happening all the time. So I'm a bit perplexed about what might be going on.

Incidentally, if you want to play with one of these, I do recommend getting a starter kit from Amazon or elsewhere. It's probably cheaper than buying everything separately, and gets you everything you need to get started. Well worth the $69.99.

Monday, March 30, 2015

Testing patches with a couple of commands using a buildfarm animal

I've blogged before about how the buildfarm client software can be useful for developers and reviewers. Yesterday was a perfect example. I was testing a set of patches for a bug fix for pg_upgrade running on Windows, and they go all the way back to the 9.0 release. The simplest way to test these was using a buildfarm animal. On jacana, I applied the relevant patch in each branch repo, and then simply did this to build and test them all:

for f in root/[RH]* ; do 
  br=`basename $f`
  perl ./run_build.pl --from-source=`pwd`/$f/pgsql --config=jacana.conf --verbose $br
done

After it was all done and everything worked, I cleaned up the git repositories so they were ready for more buildfarm runs:

for f in root/[RH]* ; do 
  pushd $f/pgsql
  git reset --hard
  git clean -dfxq
  popd
done

Pretty simple! The commands are shown here on multiple lines for clarity, but in fact I wrote each set on one line, so after applying the patches the whole thing took two lines. (Because jacana only builds back to release 9.2, I had to repeat the process on frogmouth for 9.0 and 9.1.)

Wednesday, March 4, 2015

new pg_partman release

Keith Fiske's pg_partman is a pretty nice tool for managing partitions of tables. I've recommended it recently to a couple of clients, and it's working well for them.

Based on that I have made a couple of suggestions for improvement, and today he's made a release including one of them. Previously, the script to rebuild the child indexes essentially removed them all and rebuilt them. Now it only removes those that are no longer on the parent table, and only adds those that are on the parent but not on the child, so if you just add or delete one index on the parent that's all that gets done on the children too.

I'm happy to say that he's also working on my other, more significant suggestion, which is to have a hybrid partitioning mode where the trigger has static inserts for the likely common tables and dynamic inserts for the rest. That will mean you don't have to make nasty choices between flexibility and speed. I'm looking forward to it.

Wednesday, February 25, 2015

Stopping expensive queries before they start

Today we're releasing the code for a small PostgreSQL module called plan_filter that lets you stop queries from even starting if they meet certain criteria. Currently the module implements one such criterion: the estimated cost of the query.

After you have built and installed it, you add a couple of settings to the postgresql.conf file, like this:
shared_preload_libraries = 'plan_filter'
plan_filter.statement_cost_limit = 100000.0
Then if the planner estimates the cost of a query as higher than the statement_cost_limit, it will raise an error rather than allowing the query to run.

This module follows an idea from a discussion on the postgresql-hackers mailing list some time ago. It was developed by PostgreSQL Experts Inc for our client Twitch.TV, who have generously allowed us to make it publicly available.

Raspberry Pi 2 coming to the buildfarm

Yesterday I ordered a Raspberry Pi 2 Model B, and it should be delivered in a few days. I'm intending to set it up as a buildfarm member. The fact that you can purchase a general purpose computer the size of a credit card, with a quad-core processor and 1GB of memory (I remember when RAM was counted in kilobytes), all for USD 35.00, is amazing, even when you remember Moore's Law.

Saturday, February 14, 2015

Statistics and ordering operations on JSON fields

The JSON type is not provided with any comparison operators built in, unlike the new JSONB type. One reason for this is that it's not really clear how to do the comparison. In particular, comparing the text values seems wrong, because the white space should not matter, and arguably the order of object keys should not matter either. JSONB doesn't have these problems because it dissolves all the white space and stores object keys in a canonical order, so we have come up with sane if slightly counter-intuitive comparison operations.

This limitation on JSON is somewhat irksome, however. It prevents you from performing some operations on JSON columns, such as DISTINCT, GROUP BY, and ORDER BY.

Another issue is that it causes ANALYZE not to create any rows at all in pg_statistic for JSON columns, so even if all you want to know is the average column width, you can't find it. That makes doing things like measuring table bloat just about impossible.

If you have PLV8 available, you can create operators that work fairly sanely on JSON and that let you generate stats, use DISTINCT etc. The steps are outlined here.

But what if you don't want to load PLV8 just for this? Or what if you can't, say on a managed service that doesn't provide it? All is not lost. Here is a version which uses text comparison instead of a PLV8 function. That means you don't need to have PLV8 loaded. Since it uses text comparison, it is subject to the caveats mentioned above about white space and object keys. But it will work, and you will see rows for the column in pg_statistic. Just be careful using ordering operations or creating indexes, as the results, while consistent, might be surprising.
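If you'd rather roll your own than follow that recipe exactly, here is a minimal sketch of the idea - the function and operator class names are mine, and all the caveats above about text comparison apply:

create function json_cmp(json, json) returns int
language sql immutable strict as
$$ select bttextcmp($1::text, $2::text) $$;

create function json_lt(json, json) returns boolean
language sql immutable strict as $$ select $1::text < $2::text $$;
create function json_le(json, json) returns boolean
language sql immutable strict as $$ select $1::text <= $2::text $$;
create function json_eq(json, json) returns boolean
language sql immutable strict as $$ select $1::text = $2::text $$;
create function json_ge(json, json) returns boolean
language sql immutable strict as $$ select $1::text >= $2::text $$;
create function json_gt(json, json) returns boolean
language sql immutable strict as $$ select $1::text > $2::text $$;

create operator <  (leftarg = json, rightarg = json, procedure = json_lt);
create operator <= (leftarg = json, rightarg = json, procedure = json_le);
create operator =  (leftarg = json, rightarg = json, procedure = json_eq);
create operator >= (leftarg = json, rightarg = json, procedure = json_ge);
create operator >  (leftarg = json, rightarg = json, procedure = json_gt);

-- a default btree operator class makes ORDER BY, DISTINCT and GROUP BY work,
-- and lets ANALYZE gather statistics for json columns
create operator class json_text_ops
    default for type json using btree as
        operator 1 <,
        operator 2 <=,
        operator 3 =,
        operator 4 >=,
        operator 5 >,
        function 1 json_cmp(json, json);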

Monday, February 9, 2015

Moving PLV8 to ECMAScript6

Taras Mitran has just published a blog post about using some of the advanced features of ECMAScript version 6 to make programming with SQL in JavaScript nicer.

He notes that Template Strings allow for multiline strings, which is in itself a significant advance, and that you can also have Tagged Template Strings which can transform the Template String automagically in many interesting ways. His example turns a Template String with embedded variables into a preparable query with a list of parameters. It's very nifty, quite clever in fact.

The other feature that I would really like is proper lexically scoped variables. Perl got these donkey's years ago, and their absence for this long in JavaScript has been reprehensible. They are provided for in ES6.

My understanding is that to get these features we need to provide for PLV8 to build against the latest version of V8 (and possibly provide some initialization flags too.) Unfortunately, the V8 API seems to have changed significantly since the 3.14.5.10 that's available on my Fedora 20 workstation, so enabling it to build with, say, V8 version 4.1.0.14, which is what io.js is using, will take quite a bit of work, probably by someone whose C++-fu is greater than mine.

Tuesday, February 3, 2015

New release of PLV8

I have released a new version of PLV8, which now builds on PostgreSQL 9.4, as well as containing a number of bug fixes.

It can be downloaded at http://pgxn.org/dist/plv8

Enjoy