parent, not only those with RangeTblRefs. We need them in ExecCheckRTPerms.
Report by Brendan O'Shea. Back-patch to 8.2, where pull_up_simple_union_all
was introduced.
namespace isn't necessarily first in the search path (there could be implicit
schemas ahead of it). Examples are
test=# set search_path TO s1;
test=# create view pg_timezone_names as select * from pg_timezone_names();
ERROR: "pg_timezone_names" is already a view
test=# create table pg_class (f1 int primary key);
ERROR: permission denied: "pg_class" is a system catalog
You'd expect these commands to create the requested objects in s1, since
names beginning with pg_ aren't supposed to be reserved anymore. What is
happening is that we create the requested base table and then execute
additional commands (here, CREATE RULE or CREATE INDEX), and that code is
passed the same RangeVar that was in the original command. Since that
RangeVar has schemaname = NULL, the secondary commands think they should do a
path search, and that means they find system catalogs that are implicitly in
front of s1 in the search path.
This is perilously close to being a security hole: if the secondary command
failed to apply a permission check then it'd be possible for unprivileged
users to make schema modifications to system catalogs. But as far as I can
find, there is no code path in which a check doesn't occur. Which makes it
just a weird corner-case bug for people who are silly enough to want to
name their tables the same as a system catalog.
The relevant code has changed quite a bit since 8.2, which means this patch
wouldn't work as-is in the back branches. Since it's a corner case no one
has reported from the field, I'm not going to bother trying to back-patch.
rules to be defined with different, per-session-controllable behaviors
for replication purposes.
This will allow replication systems like Slony-I and, as has been stated
on pgsql-hackers, other products to control the firing mechanism of
triggers and rewrite rules without modifying the system catalog directly.
The firing mechanisms are controlled by a new superuser-only GUC
variable, session_replication_role, together with a change to
pg_trigger.tgenabled and a new column pg_rewrite.ev_enabled. Both
columns are a single char data type now (tgenabled was a bool before).
The possible values in these attributes are:
'O' - Trigger/Rule fires when session_replication_role is "origin"
(default) or "local". This is the default behavior.
'D' - Trigger/Rule is disabled and never fires
'A' - Trigger/Rule always fires regardless of the setting of
session_replication_role
'R' - Trigger/Rule fires when session_replication_role is "replica"
The GUC variable can only be changed while the system does not have any
cached query plans. This prevents changing the session role and then
accidentally executing stored procedures or functions whose cached plans
expand to the wrong query set due to differences in the rule firing
semantics.
The SQL syntax for changing a trigger's or rule's firing semantics is
ALTER TABLE <tabname> <when> TRIGGER|RULE <name>;
<when> ::= ENABLE | ENABLE ALWAYS | ENABLE REPLICA | DISABLE
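To illustrate (names made up): a replication system applying changes on a
subscriber might, as superuser, do
ALTER TABLE accounts ENABLE REPLICA TRIGGER accounts_audit;
SET session_replication_role = replica;
-- accounts_audit is now 'R' and fires for changes applied in this session,
-- while ordinary 'O' triggers and rules on accounts stay quiet.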
psql's \d command as well as pg_dump are extended in a backward-compatible
fashion.
Jan
module and teach PREPARE and protocol-level prepared statements to use it.
In service of this, rearrange utility-statement processing so that parse
analysis does not assume table schemas can't change before execution for
utility statements (necessary because we don't attempt to re-acquire locks
for utility statements when reusing a stored plan). This requires some
refactoring of the ProcessUtility API, but it ends up cleaner anyway,
for instance we can get rid of the QueryContext global.
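As a rough sketch of the intended behavior (object names made up): a
prepared statement should now be replanned from its stored parse tree when
the tables it references change, instead of running a stale plan:
CREATE TABLE t (id int, val text);
PREPARE q(int) AS SELECT val FROM t WHERE id = $1;
CREATE INDEX t_id_idx ON t (id);  -- invalidates the cached plan for q
EXECUTE q(42);                    -- replanned, and can now use the new index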
Still to do: fix up SPI and related code to use the plan cache; I'm tempted to
try to make SQL functions use it too. Also, there are at least some aspects
of system state that we want to ensure remain the same during a replan as in
the original processing; search_path certainly ought to behave that way for
instance, and perhaps there are others.
can create or modify rules for the table. Do setRuleCheckAsUser() while
loading rules into the relcache, rather than when defining a rule. This
ensures that permission checks for tables referenced in a rule are done with
respect to the current owner of the rule's table, whereas formerly ALTER TABLE
OWNER would fail to update the permission checking for associated rules.
Removal of separate RULE privilege is needed to prevent various scenarios
in which a grantee of RULE privilege could effectively have any privilege
of the table owner. For backwards compatibility, GRANT/REVOKE RULE is still
accepted, but it doesn't do anything. Per discussion here:
http://archives.postgresql.org/pgsql-hackers/2006-04/msg01138.php
RTE of interest, rather than the whole rangetable list. This makes
the API more understandable and avoids duplicate RTE lookups. This
patch reverts no-longer-needed portions of my patch of 2004-08-19.
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns. Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column. This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns. While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff. A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
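The user-visible case this covers is roughly this (names made up): a stored
view over a join where a column of one of the joined tables is dropped
later, e.g.
CREATE TABLE a (id int, junk text);
CREATE TABLE b (id int, val text);
CREATE VIEW v AS SELECT a.id, b.val FROM a JOIN b ON a.id = b.id;
ALTER TABLE a DROP COLUMN junk;  -- the view's join RTE now spans a dropped column
SELECT * FROM v;  -- still works; the dropped position becomes a NULL Const at rule load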
Formerly, if such a clause contained no aggregate functions we mistakenly
treated it as equivalent to WHERE. Per spec it must cause the query to
be treated as a grouped query of a single group, the same as appearance
of aggregate functions would do. Also, the HAVING filter must execute
after aggregate function computation even if it itself contains no
aggregate functions.
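For example, with a hypothetical table t, these are now both one-group
aggregations, so each returns at most one row no matter what t contains:
SELECT 1 AS result FROM t HAVING true;   -- one row, even if t is empty
SELECT 1 AS result FROM t HAVING false;  -- no rows, even if t is nonempty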
Also performed an initial run-through of upgrading our Copyright date to
extend to 2005. This first pass was very simple: change every file where
grep finds both '1996-2004' and the word 'Copyright'. I scanned through the
generated list with 'less' before and after the change, to make sure that I
only picked up the right entries.
presence of dropped columns. Document the already-presumed fact that
eref aliases in relation RTEs are supposed to have entries for dropped
columns; cause the user alias structs to have such entries too, so that
there's always a one-to-one mapping to the underlying physical attnums.
Adjust expandRTE() and related code to handle the case where a column
that is part of a JOIN has been dropped. Generalize expandRTE()'s API
so that it can be used in a couple of places that formerly rolled their
own implementation of the same logic. Fix ruleutils.c to suppress
display of aliases for columns that were dropped since the rule was made.
rather than allowing them only in a few special cases as before. In
particular you can now pass a ROW() construct to a function that accepts
a rowtype parameter. Internal generation of RowExprs fixes a number of
corner cases that used to not work very well, such as referencing the
whole-row result of a JOIN or subquery. This represents a further step in
the work I started a month or so back to make rowtype values into
first-class citizens.
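For instance (type and function made up), you can now write things like
CREATE TABLE complex (r float8, i float8);
CREATE FUNCTION modulus(complex) RETURNS float8
AS 'SELECT sqrt($1.r * $1.r + $1.i * $1.i)' LANGUAGE SQL;
SELECT modulus(ROW(3, 4));  -- a bare ROW() construct as a rowtype argument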
whose conditions might yield NULL. The negated qual to attach to the
original query is properly 'x IS NOT TRUE', not 'NOT x'. This fix
produces correct behavior, but we may be taking a performance hit because
the planner is much stupider about IS NOT TRUE than it is about NOT
clauses. Future TODO: teach prepqual, other parts of planner how to
cope with BooleanTest clauses more effectively.
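To spell out the three-valued-logic point: if the rule condition x is NULL,
'NOT x' is also NULL, so the row would drop out of both the rule action and
the original query; 'x IS NOT TRUE' yields true instead, so the original
query still picks the row up:
SELECT NOT NULL::boolean;           -- NULL
SELECT NULL::boolean IS NOT TRUE;   -- true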
COPY x (a,d,c,b) from stdin;
COPY x (a,c) to stdout;
as well as the corresponding changes to pg_dump to use the new
functionality. This functionality is not available when using
the BINARY option. If a column is not specified in the COPY FROM
statement, its default value will be used.
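A small sketch of the default-filling behavior (table definition made up);
here b is filled in from its default and d becomes NULL:
CREATE TABLE x (a int, b int DEFAULT 0, c text, d text);
COPY x (a, c) FROM stdin;
1	one
2	two
\.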
In addition to this functionality, I tweaked a couple of the
error messages emitted by the new COPY <options> checks.
Brent Verner
pg_relcheck is gone; CHECK, UNIQUE, PRIMARY KEY, and FOREIGN KEY
constraints all have real live entries in pg_constraint. pg_depend
exists, and RESTRICT/CASCADE options work on most kinds of DROP;
however, pg_depend is not yet very well populated with dependencies.
(Most of the ones that are present at this point just replace formerly
hardwired associations, such as the implicit drop of a relation's pg_type
entry when the relation is dropped.) Need to add more logic to create
dependency entries, improve pg_dump to dump constraints in place of
indexes and triggers, and add some regression tests.
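The sort of behavior this is building toward (names made up; how much of it
already works depends on how well pg_depend is populated):
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (pid int REFERENCES parent);
DROP TABLE parent;           -- refused, since child's foreign key depends on it
DROP TABLE parent CASCADE;   -- drops parent plus the dependent constraint on child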
DROP RULE and COMMENT ON RULE syntax adds an 'ON tablename' clause,
similar to TRIGGER syntaxes. To allow loading of existing pg_dump
files containing COMMENT ON RULE, the COMMENT code will still accept
the old syntax --- but only if the target rulename is unique across
the whole database.
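That is, with made-up names:
DROP RULE my_rule ON my_table;
COMMENT ON RULE my_rule ON my_table IS 'replaces direct updates';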
objects to be privilege-checked. Some change in their APIs would be
necessary no matter what in the schema environment, and simply getting
rid of the name-based interface entirely seems like the best way.
report from Joel Burton. Turns out that my simple idea of turning the
SELECT into a subquery does not interact well *at all* with the way the
rule rewriter works. Really what we need to make INSERT ... SELECT work
cleanly is to decouple targetlists from rangetables: an INSERT ... SELECT
wants to have two levels of targetlist but only one rangetable. No time
for that for 7.1, however, so I've inserted some ugly hacks to make the
rewriter know explicitly about the structure of INSERT ... SELECT queries.
Ugh :-(
maintained for each cache entry. A cache entry will not be freed until
the matching ReleaseSysCache call has been executed. This eliminates
worries about cache entries getting dropped while still in use. See
my posting to pg-hackers of even date for more info.
(Don't forget that an alias is required.) Views reimplemented as expanding
to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually
work now (he says optimistically). No UNION support in subselects/views
yet, but I have some ideas about that. Rule-related permissions checking
moved out of rewriter and into executor.
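For example (table made up), a grouped subselect in FROM now works, alias
and all:
SELECT ss.a, ss.cnt
FROM (SELECT a, count(*) AS cnt FROM t GROUP BY a) AS ss
WHERE ss.cnt > 1;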
INITDB REQUIRED!
entry that has rules. This allows us to release the rule parsetrees
on relcache flush without needing a working freeObject() routine.
Formerly, the rule trees were leaked permanently at relcache flush.
Also, clean up handling of rule creation and deletion --- there was
not sufficient locking of the relation being modified, and there was
no reliable notification of other backends that a relcache reload
was needed. Also, clean up relcache.c code so that scans of system
tables needed to load a relcache entry are done in the caller's
memory context, not in CacheMemoryContext. This prevents any
un-pfreed memory from those scans from becoming a permanent memory
leak.
mark query as having subselects if a subselect was added from a rule
WHERE condition (as opposed to a rule action). Also, fix adjustment
of varlevelsup so that it actually has some prospect of working when
inserting an expression containing a subselect into a subquery.
expression_tree_mutator rather than ad-hoc tree walking code. This shortens
the code materially and fixes a fair number of sins of omission. Also,
change modifyAggrefQual to *not* recurse into subselects, since its mission
is satisfied if it removes aggregate functions from the top level of a
WHERE clause. This cures problems with queries of the form SELECT ...
WHERE x IN (SELECT ... HAVING something-using-an-aggregate), which would
formerly get mucked up by modifyAggrefQual. The routine is still
fundamentally broken, of course, but I don't think there's any way to get
rid of it before we implement subselects in FROM ...
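For concreteness (tables made up), the shape of query that no longer gets
mangled is:
SELECT name FROM dept
WHERE dept_id IN (SELECT dept_id FROM emp GROUP BY dept_id HAVING count(*) > 5);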