13.7. Rules versus Triggers
Many things that can be done using triggers can also be implemented using the PostgreSQL rule system. What currently cannot be implemented by rules are some kinds of constraints. It is possible to place a qualified rule that rewrites a query to NOTHING if the value of a column does not appear in another table. But then the data is silently thrown away, and that is not a good idea. If checks for valid values are required, and in the case of an invalid value an error message should be generated, this must for now be done by a trigger.
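To make the contrast concrete, here is a minimal sketch of both approaches; the tables items and categories and the PL/pgSQL body are invented for illustration and are not part of the example that follows:

-- Sketch only; 'items' and 'categories' are hypothetical tables.
-- The qualified rule rewrites the INSERT to NOTHING when the category
-- has no match in the other table -- the row is silently discarded:
CREATE RULE items_ins_check AS ON INSERT TO items
    WHERE NOT EXISTS (SELECT 1 FROM categories c WHERE c.name = NEW.category)
    DO INSTEAD NOTHING;

-- A trigger, by contrast, can reject the row with an error message:
CREATE FUNCTION items_check() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM categories c WHERE c.name = NEW.category) THEN
        RAISE EXCEPTION 'invalid category: %', NEW.category;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_check_trig BEFORE INSERT ON items
    FOR EACH ROW EXECUTE PROCEDURE items_check();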
On the other hand, a trigger that is fired on INSERT on a view can do the same as a rule: put the data somewhere else and suppress the insert in the view. But it cannot do the same thing on UPDATE or DELETE, because there is no real data in the view relation that could be scanned, and thus the trigger would never get called. Only a rule will help.
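As an illustration, an ON INSERT rule on a view can redirect the data into a real table; the names below are made up for this sketch, and the ON UPDATE and ON DELETE cases would be handled by rules of the same form:

-- Hypothetical names; the rule redirects inserts on the view into the real table.
CREATE TABLE data_store (id integer, payload text);
CREATE VIEW data_view AS SELECT id, payload FROM data_store;

CREATE RULE data_view_ins AS ON INSERT TO data_view
    DO INSTEAD INSERT INTO data_store VALUES (NEW.id, NEW.payload);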
For the things that can be implemented by both, which is better depends on the usage of the database. A trigger is fired once for each affected row. A rule manipulates the parse tree or generates an additional one. So if many rows are affected by one statement, a rule issuing one extra query will usually do a better job than a trigger that is called for every single row and must execute its operations that many times.
For example: There are two tables
CREATE TABLE computer (
    hostname        text,    -- indexed
    manufacturer    text     -- indexed
);

CREATE TABLE software (
    software        text,    -- indexed
    hostname        text     -- indexed
);
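The execution plans shown below refer to indexes named comp_hostidx, comp_manufidx, and soft_hostidx; their definitions are not spelled out in this example, but they could plausibly be created as

CREATE UNIQUE INDEX comp_hostidx ON computer (hostname);
CREATE INDEX comp_manufidx ON computer (manufacturer);
CREATE INDEX soft_hostidx ON software (hostname);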
Both tables have many thousands of rows and the index on hostname is unique. The hostname column contains the fully qualified domain name of the computer. The rule/trigger should implement a constraint that deletes rows from software that reference the deleted host. Since the trigger is called for each individual row deleted from computer, it can use the statement
DELETE FROM software WHERE hostname = $1;
in a prepared and saved plan and pass the hostname in the parameter. The rule would be written as
CREATE RULE computer_del AS ON DELETE TO computer
    DO DELETE FROM software WHERE hostname = OLD.hostname;
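For comparison, the trigger side of this example could be sketched in PL/pgSQL, which prepares and caches the plan of the DELETE internally, much like the saved SPI plan described above; the function and trigger names are made up:

CREATE FUNCTION computer_del_func() RETURNS trigger AS $$
BEGIN
    -- same statement as above, with OLD.hostname passed as the parameter
    DELETE FROM software WHERE hostname = OLD.hostname;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER computer_del_trig
    BEFORE DELETE ON computer
    FOR EACH ROW EXECUTE PROCEDURE computer_del_func();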
Now we look at different types of deletes. In the case of a
DELETE FROM computer WHERE hostname = 'mypc.local.net';
the table computer is scanned by index (fast) and the query issued by the trigger would also be an index scan (fast too). The extra query from the rule would be a
DELETE FROM software WHERE computer.hostname = 'mypc.local.net' AND software.hostname = computer.hostname;
Since there are appropriate indexes set up, the planner will create a plan of
Nestloop
  ->  Index Scan using comp_hostidx on computer
  ->  Index Scan using soft_hostidx on software
So there would be not much difference in speed between the trigger and the rule implementation. With the next delete we want to get rid of all 2000 computers where the hostname starts with 'old'. There are two possible queries to do that. One is
DELETE FROM computer WHERE hostname >= 'old' AND hostname < 'ole';
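Following the pattern of the rewritten query shown for the first delete, the extra query generated by the rule would look roughly like

DELETE FROM software WHERE computer.hostname >= 'old' AND computer.hostname < 'ole'
                       AND software.hostname = computer.hostname;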
The plan for the rule query will be a
Hash Join
  ->  Seq Scan on software
  ->  Hash
    ->  Index Scan using comp_hostidx on computer
The other possible query is a
DELETE FROM computer WHERE hostname ~ '^old';
with the execution plan
Nestloop
  ->  Index Scan using comp_hostidx on computer
  ->  Index Scan using soft_hostidx on software
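Again following the earlier pattern, the query generated by the rule in this case would be roughly

DELETE FROM software WHERE computer.hostname ~ '^old'
                       AND software.hostname = computer.hostname;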
This shows that the planner does not realize that the qualification for hostname on computer could also be used for an index scan on software when there are multiple qualification expressions combined with AND, which it does in the regexp version of the query. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over computer and 2000 index scans over software. The rule implementation will do it with two queries over indexes. Whether the rule will still be faster in the sequential-scan situation depends on the overall size of the software table. 2000 query executions over the SPI manager take some time, even if all the index blocks needed to look them up will soon appear in the cache.
The last query we look at is a
DELETE FROM computer WHERE manufacturer = 'bim';
Again this could result in many rows being deleted from computer. So the trigger will again fire many queries into the executor. But the rule plan will again be the nested loop over two index scans, only using another index on computer:
Nestloop
  ->  Index Scan using comp_manufidx on computer
  ->  Index Scan using soft_hostidx on software
resulting from the rule's query
DELETE FROM software WHERE computer.manufacturer = 'bim' AND software.hostname = computer.hostname;
In any of these cases, the extra queries from the rule system will be more or less independent of the number of affected rows in a query.
Another situation is an UPDATE where the action should be performed only if a particular attribute changes. In PostgreSQL version 6.4, the attribute specification for rule events is disabled (it will have its comeback at the latest in 6.5, maybe earlier - stay tuned). So for now the only way to create a rule like the one in the shoelace_log example is to do it with a rule qualification. That results in an extra query that is always performed, even if the attribute of interest cannot change at all because it does not appear in the target list of the initial query. When this is enabled again, it will be one more advantage of rules over triggers. Optimization of a trigger must fail by definition in this case, because the fact that its actions will only be done when a specific attribute is updated is hidden in its functionality. The definition of a trigger only allows it to be specified at row level, so whenever a row is touched, the trigger must be called to make its decision. The rule system will know it by looking up the target list and will suppress the additional query completely if the attribute isn't touched. So the rule, qualified or not, will only do its scans if there ever could be something to do.
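For reference, the qualified-rule workaround mentioned here looks roughly like the shoelace_log rule built up earlier in this chapter; the column names below are assumed to match that example:

CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
    -- the qualification stands in for an attribute-level rule event:
    -- the log INSERT is added only when sl_avail actually changes
    WHERE NEW.sl_avail <> OLD.sl_avail
    DO INSERT INTO shoelace_log VALUES (NEW.sl_name, NEW.sl_avail,
                                        current_user, current_timestamp);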
Rules will only be significantly slower than triggers if their actions result in large and badly qualified joins, a situation where the planner fails. They are a big hammer. Using a big hammer without caution can cause big damage. But used with the right touch, they can hit any nail on the head.