Commit Graph

01256815a6 Use checkAsUser for selectivity estimator checks, if it's set.
In examine_variable() and examine_simple_variable(), when checking the
user's table and column privileges to determine whether to grant
access to the pg_statistic data, use checkAsUser for the privilege
checks, if it's set. This will be the case if we're accessing the
table via a view, to indicate that we should perform privilege checks
as the view owner rather than the current user.

This change makes this planner check consistent with the check in the
executor, so the planner will be able to make use of statistics if the
table is accessible via the view. This fixes a performance regression
introduced by commit e2d4ef8de8, which affects queries against
non-security barrier views in the case where the user doesn't have
privileges on the underlying table, but the view owner does.

Note that it continues to provide the same safeguards controlling
access to pg_statistic for direct table access (in which case
checkAsUser won't be set) and for security barrier views, because of
the nearby checks on rte->security_barrier and rte->securityQuals.
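
For illustration, a setup of roughly this shape (hypothetical object and
role names) is the case the fix targets:

    CREATE TABLE t (x int);               -- owned and ANALYZEd by data_owner
    CREATE VIEW v AS SELECT * FROM t;     -- also owned by data_owner, not a security barrier
    GRANT SELECT ON v TO app_user;        -- app_user has no privileges on t itself
    -- As app_user, the planner may again consult t's pg_statistic entries:
    EXPLAIN SELECT * FROM v WHERE x = 42;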

Back-patch to all supported branches because e2d4ef8de8 was.

Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.
2019-05-06 12:01:44 +01:00
93c36145ad Fix security checks for selectivity estimation functions with RLS.
In commit e2d4ef8de8, security checks were added to prevent
user-supplied operators from running over data from pg_statistic
unless the user has table or column privileges on the table, or the
operator is leakproof. For a table with RLS, however, checking for
table or column privileges is insufficient, since that does not
guarantee that the user has permission to view all of the column's
data.

Fix this by also checking for securityQuals on the RTE, and insisting
that the operator be leakproof if there are any. Thus the
leakproofness check will only be skipped if there are no securityQuals
and the user has table or column privileges on the table -- i.e., only
if we know that the user has access to all the data in the column.
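
As a sketch (hypothetical names), this is the kind of setup where the RTE
carries securityQuals and the tightened check applies:

    CREATE TABLE accounts (owner text, balance numeric);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY own_rows ON accounts USING (owner = current_user);
    GRANT SELECT ON accounts TO app_user;
    -- For app_user, selectivity estimation now consults pg_statistic only
    -- for leakproof operators, despite the table-level SELECT privilege.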

Back-patch to 9.5 where RLS was added.

Dean Rasheed, reviewed by Jonathan Katz and Stephen Frost.

Security: CVE-2019-10130
2019-05-06 11:46:33 +01:00
8439589bc7 Remove reindex_catalog test from test schedules.
As the test currently causes occasional deadlocks (due to the schema
cleanup from previous sessions potentially still running), and the
patch from f912d7dec2 has gotten a fair bit of buildfarm coverage,
remove the test from the test schedules. There's a set of minor
releases coming up.

Leave the tests in place, so they can still be run manually using EXTRA_TESTS.

For now also leave it in master, as there's no imminent release, and
there's plenty of (re-)index-related work in 12. But we'll have to
disable it before long there too, unless somebody comes up with simple
enough fixes for the deadlock (I'm about to post a vague idea to the
list).

Discussion: https://postgr.es/m/4622.1556982247@sss.pgh.pa.us
Backpatch: 9.4-11 (no master!)
2019-05-05 23:35:01 -07:00
3630c6bfad Run catalog reindexing test from 3dbb317d32 serially, to avoid deadlocks.
The tests turn out to cause deadlocks in some circumstances. Fairly
reproducibly so with -DRELCACHE_FORCE_RELEASE
-DCATCACHE_FORCE_RELEASE.  Some of the deadlocks may be hard to fix
without disproportionate measures, but others probably should be fixed
- but not in 12.

We discussed removing the new tests until we can fix the issues
underlying the deadlocks, but results from buildfarm animal
markhor (which runs with CLOBBER_CACHE_ALWAYS) indicate that there
might be a more severe, as-yet-undiagnosed issue (including on
stable branches) with reindexing catalogs. The failure is:
ERROR: could not read block 0 in file "base/16384/28025": read only 0 of 8192 bytes
Therefore it seems advisable to keep the tests.

It's not certain that running the tests in isolation removes the risk
of deadlocks. It's possible that additional locks are needed to
protect against a concurrent auto-analyze or such.

Per discussion with Tom Lane.

Discussion: https://postgr.es/m/28926.1556664156@sss.pgh.pa.us
Backpatch: 9.4-, like 3dbb317d3
2019-04-30 17:45:32 -07:00
670a492c5f Fix potential assertion failure when reindexing a pg_class index.
When reindexing individual indexes on pg_class it was possible to
trigger an assertion failure:
TRAP: FailedAssertion("!(!ReindexIsProcessingIndex(((index)->rd_id)))

That's because reindex_index() called SetReindexProcessing() - which
enables an assertion ensuring no index insertions happen into the index
- before calling RelationSetNewRelfilenode(). That's not correct for
indexes on pg_class, because RelationSetNewRelfilenode() updates the
relevant pg_class row, which needs to update the indexes.

There are two reasons this wasn't noticed earlier. Firstly, the bug
doesn't trigger when reindexing all of pg_class, as reindex_relation
has code "hiding" all yet-to-be-reindexed indexes. Secondly, the bug
only triggers when the update to pg_class doesn't turn out to be a
HOT update - otherwise there's no index insertion to trigger the
bug. Most of the time there's enough space, making this bug hard to
trigger.
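
For example, in an assert-enabled build the failure could be reached with a
statement like:

    REINDEX INDEX pg_class_oid_index;
    -- the pg_class row update done by RelationSetNewRelfilenode() must not
    -- turn out to be a HOT update for the assertion to fire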

To fix, move RelationSetNewRelfilenode() to before the
SetReindexProcessing() (and, together with some other code, to outside
of the PG_TRY()).

To make sure the error checking intended by SetReindexProcessing() is
more robust, modify CatalogIndexInsert() to check
ReindexIsProcessingIndex() even when the update is a HOT update.

Also add a few regression tests for REINDEXing of system catalogs.

The last two improvements would have prevented some of the issues
fixed in 5c1560606dc4c from being introduced in the first place.

Reported-By: Michael Paquier
Diagnosed-By: Tom Lane and Andres Freund
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
Backpatch: 9.4-, the bug is present in all branches
2019-04-29 19:43:32 -07:00
74589026f7 Repair assorted issues in locale data extraction.
cache_locale_time (extraction of LC_TIME-related info) had never been
taught the lessons we previously learned about extraction of info related
to LC_MONETARY and LC_NUMERIC.  Specifically, commit 95a777c61 taught
PGLC_localeconv() that data coming out of localeconv() was in an encoding
determined by the relevant locale, but we didn't realize that there's a
similar issue with strftime().  And commit a4930e7ca hardened
PGLC_localeconv() against errors occurring partway through, but failed
to do likewise for cache_locale_time().  So, rearrange the latter
function to perform encoding conversion and not risk failure while
it's got the locales set to temporary values.

This time around I also changed PGLC_localeconv() to treat it as FATAL
if it can't restore the previous settings of the locale values.  There
is no reason (except possibly OOM) for that to fail, and proceeding with
the wrong locale values seems like a seriously bad idea --- especially
on Windows where we have to also temporarily change LC_CTYPE.  Also,
protect against the possibility that we can't identify the codeset
reported for LC_MONETARY or LC_NUMERIC; rather than just failing,
try to validate the data without conversion.

The user-visible symptom this fixes is that if LC_TIME is set to a locale
name that implies an encoding different from the database encoding,
non-ASCII localized day and month names would be retrieved in the wrong
encoding, leading to either unexpected encoding-conversion error reports
or wrong output from to_char().  The other possible failure modes are
unlikely enough that we've not seen reports of them, AFAIK.
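
A sketch of the symptom (the locale name is platform-dependent and
hypothetical):

    SET lc_time = 'de_DE.ISO8859-1';              -- encoding differs from a UTF8 database
    SELECT to_char(date '2019-03-01', 'TMMonth');
    -- the localized month name is now converted to the database encoding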

The encoding conversion problems do not manifest on Windows, since
we'd already created special-case code to handle that issue there.

Per report from Juan José Santamaría Flecha.  Back-patch to all
supported versions.

Juan José Santamaría Flecha and Tom Lane

Discussion: https://postgr.es/m/CAC+AXB22So5aZm2vZe+MChYXec7gWfr-n-SK-iO091R0P_1Tew@mail.gmail.com
2019-04-23 18:51:31 -04:00
2f78974e29 Fix improper interaction of FULL JOINs with lateral references.
join_is_legal() needs to reject forming certain outer joins in cases
where that would lead the planner down a blind alley.  However, it
mistakenly supposed that the way to handle full joins was to treat them
as applying the same constraints as for left joins, only to both sides.
That doesn't work, as shown in bug #15741 from Anthony Skorski: given
a lateral reference out of a join that's fully enclosed by a full join,
the code would fail to believe that any join ordering is legal, resulting
in errors like "failed to build any N-way joins".

However, we don't really need to consider full joins at all for this
purpose, because we effectively force them to be evaluated in syntactic
order, and that order is always legal for lateral references.  Hence,
get rid of this broken logic for full joins and just ignore them instead.

This seems to have been an oversight in commit 7e19db0c0.
Back-patch to all supported branches, as that was.

Discussion: https://postgr.es/m/15741-276f1f464b3f40eb@postgresql.org
2019-04-08 16:09:07 -04:00
2e606d0ad7 Perform RLS subquery checks as the right user when going via a view.
When accessing a table with RLS via a view, the RLS checks are
performed as the view owner. However, the code neglected to propagate
that to any subqueries in the RLS checks. Fix that by calling
setRuleCheckAsUser() for all RLS policy quals and withCheckOption
checks for RTEs with RLS.
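
A minimal sketch of the affected pattern (hypothetical names):

    CREATE TABLE people (username text, info text);
    CREATE TABLE allowed (username text);        -- not granted to app_user
    ALTER TABLE people ENABLE ROW LEVEL SECURITY;
    ALTER TABLE people FORCE ROW LEVEL SECURITY;
    CREATE POLICY p ON people
      USING (username IN (SELECT username FROM allowed));
    CREATE VIEW v AS SELECT * FROM people;
    GRANT SELECT ON v TO app_user;
    -- When app_user selects from v, the policy's subquery on "allowed" now
    -- runs with the view owner's privileges, like the rest of the RLS checks.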

Back-patch to 9.5 where RLS was added.

Per bug #15708 from daurnimator.

Discussion: https://postgr.es/m/15708-d65cab2ce9b1717a@postgresql.org
2019-04-02 08:22:48 +01:00
da45927cef Accept XML documents when xmloption = content, as required by SQL:2006+.
Previously we were using the SQL:2003 definition, which doesn't allow
this, but that creates a serious dump/restore gotcha: there is no
setting of xmloption that will allow all valid XML data.  Hence,
switch to the 2006 definition.

Since libxml doesn't accept <!DOCTYPE> directives in the mode we
use for CONTENT parsing, the implementation is to detect <!DOCTYPE>
in the input and switch to DOCUMENT parsing mode.  This should not
cost much, because <!DOCTYPE> should be close to the front of the
input if it's there at all.  It's possible that this causes the
error messages for malformed input to be slightly different than
they were before, if said input includes <!DOCTYPE>; but that does
not seem like a big problem.
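
For example (assuming otherwise-default settings):

    SET xmloption = content;
    SELECT '<!DOCTYPE book><book><title>x</title></book>'::xml;
    -- previously rejected under CONTENT parsing; now accepted per SQL:2006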

In passing, buy back a few cycles in parsing of large XML documents
by not doing strlen() of the whole input in parse_xml_decl().

Back-patch because dump/restore failures are not nice.  This change
shouldn't break any cases that worked before, so it seems safe to
back-patch.

Chapman Flack (revised a bit by me)

Discussion: https://postgr.es/m/CAN-V+g-6JqUQEQZ55Q3toXEN6d5Ez5uvzL4VR+8KtvJKj31taw@mail.gmail.com
2019-03-23 16:24:30 -04:00
71288846ef Ensure dummy paths have correct required_outer if rel is parameterized.
The assertions added by commits 34ea1ab7f et al found another problem:
set_dummy_rel_pathlist and mark_dummy_rel were failing to label
the dummy paths they create with the correct outer_relids, in case
the relation is necessarily parameterized due to having lateral
references in its tlist.  It's likely that this has no user-visible
consequences in production builds, at the moment; but still an assertion
failure is a bad thing, so back-patch the fix.

Per bug #15694 from Roman Zharkov (via Alexander Lakhin)
and an independent report by Tushar Ahuja.

Discussion: https://postgr.es/m/15694-74f2ca97e7044f7f@postgresql.org
Discussion: https://postgr.es/m/7d72ab20-c725-3ce2-f99d-4e64dd8a0de6@enterprisedb.com
2019-03-14 12:16:10 -04:00
d8f8183c04 Disallow NaN as a value for floating-point GUCs.
None of the code that uses GUC values is really prepared for them to
hold NaN, but parse_real() didn't have any defense against accepting
such a value.  Treat it the same as a syntax error.
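
For example (any floating-point GUC behaves the same way):

    SET cpu_tuple_cost = 'NaN';
    -- now rejected with an invalid-value error rather than silently accepted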

I haven't attempted to analyze the exact consequences of setting any
of the float GUCs to NaN, but since they're quite unlikely to be good,
this seems like a back-patchable bug fix.

Note: we don't need an explicit test for +-Infinity because those will
be rejected by existing range checks.  I added a regression test for
that in HEAD, but not older branches because the spelling of the value
in the error message will be platform-dependent in branches where we
don't always use port/snprintf.c.

Discussion: https://postgr.es/m/1798.1552165479@sss.pgh.pa.us
2019-03-10 12:58:52 -04:00
556fdd49c9 Further fixing for multi-row VALUES lists for updatable views.
Previously, rewriteTargetListIU() generated a list of attribute
numbers from the targetlist, which were passed to rewriteValuesRTE(),
which expected them to contain the same number of entries as there are
columns in the VALUES RTE, and to be in the same order. That was fine
when the target relation was a table, but for an updatable view it
could be broken in at least three different ways ---
rewriteTargetListIU() could insert additional targetlist entries for
view columns with defaults, the view columns could be in a different
order from the columns of the underlying base relation, and targetlist
entries could be merged together when assigning to elements of an
array or composite type. As a result, when recursing to the base
relation, the list of attribute numbers generated from the rewritten
targetlist could no longer be relied upon to match the columns of the
VALUES RTE. We got away with that prior to 41531e42d3 because it used
to always be the case that rewriteValuesRTE() did nothing for the
underlying base relation, since all DEFAULT items had already been replaced
when it was initially invoked for the view, but that was incorrect
because it failed to apply defaults from the base relation.
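
One of the broken shapes, as a sketch with hypothetical names: a view whose
columns are ordered differently from the base table's.

    CREATE TABLE base (a int, b text DEFAULT 'b-default');
    CREATE VIEW v AS SELECT b, a FROM base;   -- auto-updatable, reordered columns
    INSERT INTO v VALUES (DEFAULT, 1), (DEFAULT, 2);
    -- the DEFAULT items must be matched to base.b, not positionally to base.a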

Fix this by examining the targetlist entries more carefully and
picking out just those that are simple Vars referencing the VALUES
RTE. That's sufficient for the purposes of rewriteValuesRTE(), which
is only responsible for dealing with DEFAULT items in the VALUES
RTE. Any DEFAULT item in the VALUES RTE that doesn't have a matching
simple-Var-assignment in the targetlist is an error which we complain
about, but in theory that ought to be impossible.

Additionally, move this code into rewriteValuesRTE() to give a clearer
separation of concerns between the 2 functions. There is no need for
rewriteTargetListIU() to know about the details of the VALUES RTE.

While at it, fix the comment for rewriteValuesRTE() which claimed that
it doesn't support array element and field assignments --- that hasn't
been true since a3c7a993d5 (9.6 and later).

Back-patch to all supported versions, with minor differences for the
pre-9.6 branches, which don't support array element and field
assignments to the same column in multi-row VALUES lists.

Reviewed by Amit Langote.

Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
2019-03-03 10:57:37 +00:00
0fe3f6dd45 Fix plan created for inherited UPDATE/DELETE with all tables excluded.
In the case where inheritance_planner() finds that every table has
been excluded by constraints, it thought it could get away with
making a plan consisting of just a dummy Result node.  While certainly
there's no updating or deleting to be done, this had two user-visible
problems: the plan did not report the correct set of output columns
when a RETURNING clause was present, and if there were any
statement-level triggers that should be fired, it didn't fire them.
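
One way to reach the all-excluded case, as a sketch with hypothetical names:

    CREATE TABLE parent (k int, v text);
    CREATE TABLE child (CHECK (k > 0)) INHERITS (parent);
    DELETE FROM parent WHERE false RETURNING *;
    -- RETURNING now reports the correct output columns, and any statement-level
    -- DELETE triggers on parent are fired even though no rows can be affected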

Hence, rather than only generating the dummy Result, we need to
stick a valid ModifyTable node on top, which requires a tad more
effort here.

It's been broken this way for as long as inheritance_planner() has
known about deleting excluded subplans at all (cf commit 635d42e9c),
so back-patch to all supported branches.

Amit Langote and Tom Lane, per a report from Petr Fedorov.

Discussion: https://postgr.es/m/5da6f0f0-1364-1876-6978-907678f89a3e@phystech.edu
2019-02-22 12:23:00 -05:00
90f91bfbd7 Fix DEFAULT-handling in multi-row VALUES lists for updatable views.
INSERT ... VALUES for a single VALUES row is implemented differently
from a multi-row VALUES list, which causes inconsistent behaviour in
the way that DEFAULT items are handled. In particular, when inserting
into an auto-updatable view on top of a table with a column default, a
DEFAULT item in a single VALUES row gets correctly replaced with the
table column's default, but for a multi-row VALUES list it is replaced
with NULL.
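
A minimal sketch of the inconsistency (hypothetical names):

    CREATE TABLE base (a int, b text DEFAULT 'b-default');
    CREATE VIEW v AS SELECT a, b FROM base;            -- auto-updatable view
    INSERT INTO v VALUES (1, DEFAULT);                 -- b gets 'b-default'
    INSERT INTO v VALUES (2, DEFAULT), (3, DEFAULT);   -- b previously got NULL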

Fix this by allowing rewriteValuesRTE() to leave DEFAULT items in the
VALUES list untouched if the target relation is an auto-updatable view
and has no column default, deferring DEFAULT-expansion until the query
against the base relation is rewritten. For all other types of target
relation, including tables and trigger- and rule-updatable views, we
must continue to replace DEFAULT items with NULL in the absence of a
column default.

This is somewhat complicated by the fact that if an auto-updatable
view has DO ALSO rules attached, the VALUES lists for the product
queries need to be handled differently from the original query, since
the product queries need to act like rule-updatable views whereas the
original query has auto-updatable view semantics.

Back-patch to all supported versions.

Reported by Roger Curley (bug #15623). Patch by Amit Langote and me.

Discussion: https://postgr.es/m/15623-5d67a46788ec8b7f@postgresql.org
2019-02-20 08:22:52 +00:00
0b476cf623 Fix CREATE VIEW to allow zero-column views.
We should logically have allowed this case when we allowed zero-column
tables, but it was overlooked.

Although this might be thought a feature addition, it's really a bug
fix, because it was possible to create a zero-column view via
the convert-table-to-view code path, and then you'd have a situation
where dump/reload would fail.  Hence, back-patch to all supported
branches.
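
For example, this is now accepted directly (hypothetical names):

    CREATE TABLE zt ();
    CREATE VIEW zv AS SELECT FROM zt;   -- zero-column view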

Arrange the added test cases to provide coverage of the related
pg_dump code paths (since these views will be dumped and reloaded
during the pg_upgrade regression test).  I also made them test
the case where pg_dump has to postpone the view rule into post-data,
which disturbingly had no regression coverage before.

Report and patch by Ashutosh Sharma (test case by me)

Discussion: https://postgr.es/m/CAE9k0PkmHdeSaeZt2ujnb_cKucmK3sDDceDzw7+d5UZoNJPYOg@mail.gmail.com
2019-02-17 12:37:32 -05:00
06cdab9df4 Fix support for CREATE TABLE IF NOT EXISTS AS EXECUTE
The IF NOT EXISTS grammar for CTAS has been supported since 9.5 and is
documented as such; however, the case of using EXECUTE as the query was
never handled, because EXECUTE CTAS statements and normal CTAS statements
are parsed separately.
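
For example (hypothetical names):

    PREPARE q AS SELECT 1 AS x;
    CREATE TABLE IF NOT EXISTS t AS EXECUTE q;
    CREATE TABLE IF NOT EXISTS t AS EXECUTE q;   -- now a notice, not an error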

Author: Andreas Karlsson
Discussion: https://postgr.es/m/2ddcc188-e37c-a0be-32bf-a56b07c3559e@proxel.se
Backpatch-through: 9.5
2019-02-15 17:12:51 +09:00
1afcf6aed7 Solve cross-version-upgrade testing problem induced by 1fb57af92.
Renaming varchar_transform to varchar_support had a side effect
I hadn't foreseen: the core regression tests leave around a
transform object that relies on that function, so the name
change breaks cross-version upgrade tests, because the name
used in the older branches doesn't match.

Since the dependency on varchar_transform was chosen with the
aid of a dartboard anyway (it would surely not work as a
language transform support function), fix by just choosing
a different random builtin function with the right signature.
Also add some comments explaining why this isn't horribly unsafe.

I chose to make the same substitution in a couple of other
copied-and-pasted test cases, for consistency, though those
aren't directly contributing to the testing problem.

Per buildfarm.  Back-patch, else it doesn't fix the problem.
2019-02-09 21:02:06 -05:00
7b8c6ef1e0 Fix searchpath and module location for pg_rewind and ssl TAP tests
The modules RewindTest.pm and ServerSetup.pm are really only useful for
TAP tests, so they really belong in the TAP test directories. In
addition, ServerSetup.pm is renamed to SSLServer.pm.

The test scripts have their own directories added to the search path so
that the relocated modules will be found, regardless of where the tests
are run from, even on modern perl where "." is no longer in the
searchpath.

Discussion: https://postgr.es/m/e4b0f366-269c-73c3-9c90-d9cb0f4db1f9@2ndQuadrant.com

Backpatch as appropriate to 9.5
2019-02-07 11:10:59 -05:00
741ee48900 Allow UNLISTEN in hot-standby mode.
Since LISTEN is (still) disallowed, UNLISTEN must be a no-op in a
hot-standby session, and so there's no harm in allowing it.  This
change allows client code to not worry about whether it's connected
to a primary or standby server when performing session-state-reset
type activities.  (Note that DISCARD ALL, which includes UNLISTEN,
was already allowed, making it inconsistent to reject UNLISTEN.)
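
For example, in a session connected to a hot standby:

    UNLISTEN *;      -- now accepted as a no-op
    DISCARD ALL;     -- was already accepted, and includes UNLISTEN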

Per discussion, back-patch to all supported versions.

Shay Rojansky, reviewed by Mi Tar

Discussion: https://postgr.es/m/CADT4RqCf2gA_TJtPAjnGzkC3ZiexfBZiLmA-mV66e4UyuVv8bA@mail.gmail.com
2019-01-25 21:14:31 -05:00
91448e7dcc Postpone aggregate checks until after collation is assigned.
Previously, parseCheckAggregates was run before
assign_query_collations, but this causes problems if any expression
has already had a collation assigned by some transform function (e.g.
transformCaseExpr) before parseCheckAggregates runs. The differing
collations would cause expressions not to be recognized as equal to
the ones in the GROUP BY clause, leading to spurious errors about
unaggregated column references.

The result was that CASE expr WHEN val ... would fail when "expr"
contained a GROUPING() expression or matched one of the group by
expressions, and where collatable types were involved; whereas the
supposedly identical CASE WHEN expr = val ... would succeed.
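
A sketch of the failing shape (hypothetical names; x is of a collatable type
such as text, and the CASE arg matches a GROUP BY expression):

    SELECT CASE lower(x) WHEN 'a' THEN 1 ELSE 2 END
    FROM tbl
    GROUP BY lower(x);
    -- previously could draw a spurious error about x not appearing in GROUP BY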

Backpatch all the way; this appears to have been wrong ever since
collations were introduced.

Per report from Guillaume Lelarge, analysis and patch by me.

Discussion: https://postgr.es/m/CAECtzeVSO_US8C2Khgfv54ZMUOBR4sWq+6_bLrETnWExHT=rFg@mail.gmail.com
Discussion: https://postgr.es/m/87muo0k0c7.fsf@news-spur.riddles.org.uk
2019-01-17 06:29:43 +00:00
7f25768126 Update ssl test certificates and keys
Debian testing and newer now require that RSA and DHE keys are at
least 2048 bits long and no longer allow SHA-1 for signatures in
certificates.  This is currently causing the ssl tests to fail there
because the test certificates and keys have been created in violation
of those conditions.

Update the parameters to create the test files and create a new set of
test files.

Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
Reported-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/20180917131340.GE31460%40paquier.xyz
2019-01-03 15:08:32 +01:00
2fe431f1be pg_regress: Promptly detect failed postmaster startup.
Detect it the way pg_ctl's wait_for_postmaster() does.  When pg_regress
spawned a postmaster that failed startup, we were detecting that only
with "pg_regress: postmaster did not respond within 60 seconds".
Back-patch to 9.4 (all supported versions).

Reviewed by Tom Lane.

Discussion: https://postgr.es/m/20181231172922.GA199150@gust.leadboat.com
2018-12-31 13:51:26 -08:00
0a323ae676 Ignore inherited temp relations from other sessions when truncating
Inheritance trees can include temporary tables if the parent is
permanent, which makes possible the presence of multiple temporary
children from different sessions.  Trying to issue a TRUNCATE on the
parent in this scenario causes a failure, so, as with any other query,
just ignore such cases, which makes TRUNCATE work
transparently.

This makes truncation behave similarly to any other DML query working on
the parent table with queries which need to be issued on children.  A
set of isolation tests is added to cover basic cases.
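
A sketch of the scenario (hypothetical names):

    -- session 1:
    CREATE TABLE parent (a int);
    -- session 2:
    CREATE TEMP TABLE temp_child () INHERITS (parent);
    -- session 1:
    TRUNCATE parent;   -- previously failed; now ignores session 2's temp child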

Reported-by: Zhou Digoal
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15565-ce67a48d0244436a@postgresql.org
Backpatch-through: 9.4
2018-12-27 10:17:26 +09:00
c7567e09d5 Fix use-after-free bug when renaming constraints
This is an oversight from recent commit b13fd344.  While on it, tweak
the previous test with a better name for the renamed primary key.

Detected by buildfarm member prion which forces relation cache release
with -DRELCACHE_FORCE_RELEASE.  Back-patch down to 9.4 as the previous
commit.
2018-12-17 12:44:02 +09:00
5812be7b63 Make constraint rename issue relcache invalidation on target relation
When a constraint gets renamed, it may have associated with it a target
relation (for example domain constraints don't have one).  Not
invalidating the target relation cache when issuing the renaming can
result in issues with subsequent commands that refer to the old
constraint name using the relation cache, causing various failures.  One
pattern spotted was using CREATE TABLE LIKE after a constraint
renaming.
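
A sketch of the reported pattern (hypothetical names):

    CREATE TABLE t (a int CONSTRAINT t_a_chk CHECK (a > 0));
    ALTER TABLE t RENAME CONSTRAINT t_a_chk TO t_a_check;
    CREATE TABLE t_copy (LIKE t INCLUDING CONSTRAINTS);
    -- previously could fail because the relation cache still carried the
    -- constraint under its old name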

Reported-by: Stuart <sfbarbee@gmail.com>
Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/2047094.V130LYfLq4@station53.ousa.org
2018-12-17 10:36:34 +09:00
6548d62a98 Repair bogus handling of multi-assignment Params in upper plan levels.
Our support for multiple-set-clauses in UPDATE assumes that the Params
referencing a MULTIEXPR_SUBLINK SubPlan will appear before that SubPlan
in the targetlist of the plan node that calculates the updated row.
(Yeah, it's a hack...)  In some PG branches it's possible that a Result
node gets inserted between the primary calculation of the update tlist
and the ModifyTable node.  setrefs.c did the wrong thing in this case
and left the upper-level Params as Params, causing a crash at runtime.
What it should do is replace them with "outer" Vars referencing the child
plan node's output.  That's a result of careless ordering of operations
in fix_upper_expr_mutator, so we can fix it just by reordering the code.
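
The construct involved is the multiple-column set clause, for example
(hypothetical names; the crash additionally requires the plan shape
described above):

    UPDATE t SET (a, b) = (SELECT s.x, s.y FROM s WHERE s.id = t.id);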

Fix fix_join_expr_mutator similarly for consistency, even though join
nodes could never appear in such a context.  (In general, it seems
likely to be a bit cheaper to use Vars than Params in such situations
anyway, so this patch might offer a tiny performance improvement.)

The hazard extends back to 9.5 where the MULTIEXPR_SUBLINK stuff
was introduced, so back-patch that far.  However, this may be a live
bug only in 9.6.x and 10.x, as the other branches don't seem to want
to calculate the final tlist below the Result node.  (That plan shape
change between branches might be a mini-bug in itself, but I'm not
really interested in digging into the reasons for that right now.
Still, add a regression test memorializing what we expect there,
so we'll notice if it changes again.)

Per bug report from Eduards Bezverhijs.

Discussion: https://postgr.es/m/b6cd572a-3e44-8785-75e9-c512a5a17a73@tieto.com
2018-12-12 13:49:42 -05:00
63210c864b Fix test_rls_hooks to assign expression collations properly.
This module overlooked this necessary fixup step on the results of
transformWhereClause().  It accidentally worked anyway, because the
constructed expression involved type "name" which is not collatable,
but it fell over while I was experimenting with changing "name" to
be collatable.

Back-patch, not because there's any live bug here in back branches,
but because somebody might use this code as a model for some real
application and then not understand why it doesn't work.
2018-12-11 11:48:01 -05:00
4d5cfb9115 Fix misapplication of pgstat_count_truncate to wrong relation.
The stanza of ExecuteTruncate[Guts] that truncates a target table's toast
relation re-used the loop local variable "rel" to reference the toast rel.
This was safe enough when written, but commit d42358efb added code below
that that supposed "rel" still pointed to the parent table.  Therefore,
the stats counter update was applied to the wrong relcache entry (the
toast rel not the user rel); and if we were unlucky and that relcache
entry had been flushed during reindex_relation, very bad things could
ensue.

(I'm surprised that CLOBBER_CACHE_ALWAYS testing hasn't found this.
I'm even more surprised that the problem wasn't detected during the
development of d42358efb; it must not have been tested in any case
with a toast table, as the incorrect stats counts are very obvious.)

To fix, replace use of "rel" in that code branch with a more local
variable.  Adjust test cases added by d42358efb so that some of them
use tables with toast tables.

Per bug #15540 from Pan Bian.  Back-patch to 9.5 where d42358efb came in.

Discussion: https://postgr.es/m/15540-01078812338195c0@postgresql.org
2018-12-07 12:12:00 -05:00
298510caee Update additional float4/8 expected-output files.
I forgot that the back branches have more variant files than HEAD :-(.
Per buildfarm.

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-24 13:53:12 -05:00
1e78603a54 Fix float-to-integer coercions to handle edge cases correctly.
ftoi4 and its sibling coercion functions did their overflow checks in
a way that looked superficially plausible, but actually depended on an
assumption that the MIN and MAX comparison constants can be represented
exactly in the float4 or float8 domain.  That fails in ftoi4, ftoi8,
and dtoi8, resulting in a possibility that values near the MAX limit will
be wrongly converted (to negative values) when they need to be rejected.

Also, because we compared before rounding off the fractional part,
the other three functions threw errors for values that really ought
to get rounded to the min or max integer value.

Fix by doing rint() first (requiring an assumption that it handles
NaN and Inf correctly; but dtoi8 and ftoi8 were assuming that already),
and by comparing to values that should coerce to float exactly, namely
INTxx_MIN and -INTxx_MIN.  Also remove some random cosmetic discrepancies
between these six functions.
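
Two examples of the changed edge cases:

    SELECT '2147483647'::float4::int;   -- rounds up to 2^31 in float4; now raises
                                        -- "integer out of range" instead of
                                        -- possibly yielding a negative value
    SELECT 2147483647.4::float8::int;   -- now rounds to 2147483647 instead of
                                        -- raising an out-of-range error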

This back-patches commits cbdb8b4c0 and 452b637d4.  In the 9.4 branch,
also back-patch the portion of 62e2a8dc2 that added PG_INTnn_MIN and
related constants to c.h, so that these functions can rely on them.

Per bug #15519 from Victor Petrovykh.

Patch by me; thanks to Andrew Gierth for analysis and discussion.

Discussion: https://postgr.es/m/15519-4fc785b483201ff1@postgresql.org
2018-11-24 12:45:50 -05:00
ced3aaa5b6 Fix old TAP tests' method for selecting a valid PGPORT value.
This code was trying to be paranoid, but it wasn't paranoid enough.
It only ensured that the selected port is in 0..65535, while most
Unix systems will refuse unprivileged attempts to use TCP port numbers
below 1024.

Change it to allow specification of ports 1024..65535, while if the
port is outside that range, map it into 49152..65535 which is the
port range used by our later branches.

The main reason we've not noticed this up to now is that it's not
important when testing over Unix-socket connections, only TCP,
and most of our test code deliberately prevents the postmaster from
opening any TCP ports.  However, the SSL tests do open up a TCP port,
and I believe this explains why buildfarm member chipmunk has been
failing the SSL tests in 9.5: it's picking a reserved port number.

Patch in 9.5 and 9.4.  Later branches do not use this code.
2018-11-19 20:01:35 -05:00
cc02db82c0 Lower privilege level of programs calling regression_main
On Windows this means that the regression tests can now safely and
successfully run as Administrator, which is useful in situations like
Appveyor. Elsewhere it's a no-op.

Backpatch to 9.5 - this is harder in earlier branches and not worth the
trouble.

Discussion: https://postgr.es/m/650b0c29-9578-8571-b1d2-550d7f89f307@2ndQuadrant.com
2018-10-20 09:11:18 -04:00
ac3be116a7 Client-side fixes for delayed NOTIFY receipt.
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket.  (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.)  The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.

Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop.  Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call.  This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages.  Hence, adjust code as well as documentation examples
to do it like that.

Back-patch to 9.5 to match related server fixes.  In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.

Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
2018-10-19 22:22:57 -04:00
7a1e7b291d Fix minor bug in isolationtester.
If the lock wait query failed, isolationtester would report the
PQerrorMessage from some other connection, meaning there would be
no message or an unrelated one.  This seems like a pretty unlikely
occurrence, but if it did happen, this bug could make it really
difficult/confusing to figure out what happened.  That seems to
justify patching all the way back.

In passing, clean up another place where the "wrong" conn was used
for an error report.  That one's not actually buggy because it's
a different alias for the same connection, but it's still confusing
to the reader.
2018-10-17 15:06:38 -04:00
7249d1d641 Avoid rare race condition in privileges.sql regression test.
We created a temp table, then switched to a new session, leaving
the old session to clean up its temp objects in background.  If that
took long enough, the eventual attempt to drop the user that owns
the temp table could fail, as exhibited today by sidewinder.
Fix by dropping the temp table explicitly when we're done with it.

It's been like this for quite some time, so back-patch to all
supported branches.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2018-10-16%2014%3A45%3A00
2018-10-16 13:56:58 -04:00
43cc4e49ed Remove abstime, reltime, tinterval tables from old regression databases.
In the back branches, drop these tables after the regression tests are
done with them.  This fixes failures of cross-branch pg_upgrade testing
caused by these types having been removed in v12.  We do lose the ability
to test dump/restore behavior with these types in the back branches, but
the actual loss of code coverage seems to be nil given that there's nothing
very special about these types.

Discussion: https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
2018-10-12 19:33:56 -04:00
dad4df0fc8 Fix corner-case failures in has_foo_privilege() family of functions.
The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.

has_column_privilege() had careless handling of the case where the
table OID didn't exist.  You might get something like this:
	select has_column_privilege(9999,'nosuchcol','select');
	ERROR:  column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.

In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
	select has_column_privilege('mytable','........pg.dropped.2........','select');
	ERROR:  column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.

Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs.  Superusers got TRUE,
everybody else got an error.

Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.

Patch by me; thanks to Stephen Frost for discussion and review

Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com
2018-10-02 11:54:13 -04:00
c8a978bf4c Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node.  In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active.  We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c.  But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current.  To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.

Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree.  Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags.  I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption.  So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.

Per bug #15395 from Matvey Arye.  This has been broken for a long time,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/153764171023.14986.280404050547008575@wrigleys.postgresql.org
2018-09-23 16:05:45 -04:00
9b14bbd52c Fix failure with initplans used conditionally during EvalPlanQual rechecks.
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally.  For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
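
A sketch of the failure scenario under READ COMMITTED (hypothetical table and
column names):

    -- session 1:
    BEGIN;
    UPDATE t SET flag = true WHERE id = 1;
    -- session 2 (blocks on the same row; the uncorrelated sub-select is an
    -- initplan not reached during the main scan, because flag was false):
    UPDATE t SET val = CASE WHEN flag THEN (SELECT max(x) FROM other) ELSE 0 END
    WHERE id = 1;
    -- session 1:
    COMMIT;   -- session 2's EPQ recheck now evaluates the initplan correctly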

This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.

To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment.  This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another.  It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.

It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.

Back-patch all the way.  Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3.  (This isn't the first time back-patches have
needed that, and it exhausted my patience.)  I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.

Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.

Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
2018-09-15 13:42:34 -04:00
77c2663de7 Repair bug in regexp split performance improvements.
Commit c8ea87e4b introduced a temporary conversion buffer for
substrings extracted during regexp splits. Unfortunately the code that
sized it was failing to ignore the effects of ignored degenerate
regexp matches, so for regexp_split_* calls it could under-size the
buffer in such cases.

Fix, and add some regression test cases (though those will only catch
the bug if run in a multibyte encoding).

Backpatch to 9.3 as the faulty code was.

Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the
report (via IRC) and assistance in analysis. Patch by me.
2018-09-12 19:52:10 +01:00
af988d1301 Fix lexing of standard multi-character operators in edge cases.
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.

The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:

1. If immediately followed by a comment or +-, >= <= <> would be given
   the old precedence of Op rather than the correct new precedence;

2. If followed by a comment, != would be returned as Op rather than as
   NOT_EQUAL, causing it not to be found at all;

3. If followed by a comment or +-, the => token for named arguments
   would be lexed as Op, causing the argument to be mis-parsed as a
   simple expression, usually causing an error.

Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
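
Two examples of inputs affected by the lexer fix:

    SELECT 1 !=/* comment */ 2;   -- "!=" followed by a comment is now lexed as
                                  -- NOT_EQUAL instead of an unknown operator
    SELECT (1<>-1);               -- "<>" followed by another operator character
                                  -- now keeps its correct precedence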

Backpatch to 9.5 where the problem was introduced.

Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
2018-08-23 21:35:53 +01:00
d2ecc27c37 Set scan direction appropriately for SubPlans (bug #15336)
When executing a SubPlan in an expression, the EState's direction
field was left alone, resulting in an attempt to execute the subplan
backwards if it was encountered during a backwards scan of a cursor.
Also, though much less likely, it was possible to reach the execution
of an InitPlan while in backwards-scan state.
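
A sketch of the affected pattern (hypothetical names):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR
      SELECT a, (SELECT count(*) FROM other o WHERE o.b = t.a) FROM t;
    FETCH 2 FROM c;
    FETCH BACKWARD 1 FROM c;   -- previously attempted to run the SubPlan backwards
    COMMIT;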

Repair by saving/restoring estate->es_direction and forcing forward
scan mode in the relevant places.

Backpatch all the way, since this has been broken since 8.3 (prior to
commit c7ff7663e, SubPlans had their own EStates rather than sharing
the parent plan's, so there was no confusion over scan direction).

Per bug #15336 reported by Vladimir Baranoff; analysis and patch by
me, review by Tom Lane.

Discussion: https://postgr.es/m/153449812167.1304.1741624125628126322@wrigleys.postgresql.org
2018-08-17 16:20:04 +01:00
c81062e8e1 Clean up assorted misuses of snprintf()'s result value.
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly.  The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize".  Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.

(Note that this only makes these places correct if snprintf() delivers
C99-compliant results.  But at least now these places are consistent
with all the other places where we assume that.)

Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands.  There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.

Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf.  In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.

These errors are all very old, so back-patch as appropriate.  I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.

Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
2018-08-15 16:29:32 -04:00
5ad143cda0 Fix INSERT ON CONFLICT UPDATE through a view that isn't just SELECT *.
When expanding an updatable view that is an INSERT's target, the rewriter
failed to rewrite Vars in the ON CONFLICT UPDATE clause.  This accidentally
worked if the view was just "SELECT * FROM ...", as the transformation
would be a no-op in that case.  With more complicated view targetlists,
this omission would often lead to "attribute ... has the wrong type" errors
or even crashes, as reported by Mario De Frutos Dieguez.
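
A sketch of the affected pattern (hypothetical names):

    CREATE TABLE base (id int PRIMARY KEY, val text);
    CREATE VIEW v AS SELECT id, val AS value FROM base;   -- not just SELECT *
    INSERT INTO v VALUES (1, 'x')
      ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;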

Fix by adding code to rewriteTargetView to fix up the data structure
correctly.  The easiest way to update the exclRelTlist list is to rebuild
it from scratch looking at the new target relation, so factor the code
for that out of transformOnConflictClause to make it sharable.

In passing, avoid duplicate permissions checks against the EXCLUDED
pseudo-relation, and prevent useless view expansion of that relation's
dummy RTE.  The latter is only known to happen (after this patch) in cases
where the query would fail later due to not having any INSTEAD OF triggers
for the view.  But by exactly that token, it would create an unintended
and very poorly tested state of the query data structure, so it seems like
a good idea to prevent it from happening at all.

This has been broken since ON CONFLICT was introduced, so back-patch
to 9.5.

Dean Rasheed, based on an earlier patch by Amit Langote;
comment-kibitzing and back-patching by me

Discussion: https://postgr.es/m/CAFYwGJ0xfzy8jaK80hVN2eUWr6huce0RU8AgU04MGD00igqkTg@mail.gmail.com
2018-08-04 19:38:59 -04:00
12f2d814ac Further fixes for quoted-list GUC values in pg_dump and ruleutils.c.
Commits 742869946 et al turn out to be a couple bricks shy of a load.
We were dumping the stored values of GUC_LIST_QUOTE variables as they
appear in proconfig or setconfig catalog columns.  However, although that
quoting rule looks a lot like SQL-identifier double quotes, there are two
critical differences: empty strings ("") are legal, and depending on which
variable you're considering, values longer than NAMEDATALEN might be valid
too.  So the current technique fails altogether on empty-string list
entries (as reported by Steven Winfield in bug #15248) and it also risks
truncating file pathnames during dump/reload of GUC values that are lists
of pathnames.
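
A sketch of the empty-string case (hypothetical function name):

    CREATE FUNCTION f() RETURNS int
      LANGUAGE sql SET search_path = ''    -- empty-string list entry
      AS 'SELECT 1';
    SELECT pg_get_functiondef('f'::regproc);
    -- the SET clause is now reproduced with the empty entry intact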

To fix, split the stored value without any downcasing or truncation,
and then emit each element as a SQL string literal.

This is a tad annoying, because we now have three copies of the
comma-separated-string splitting logic in varlena.c as well as a fourth
one in dumputils.c.  (Not to mention the randomly-different-from-those
splitting logic in libpq...)  I looked at unifying these, but it would
be rather a mess unless we're willing to tweak the API definitions of
SplitIdentifierString, SplitDirectoriesString, or both.  That might be
worth doing in future; but it seems pretty unsafe for a back-patched
bug fix, so for now accept the duplication.

Back-patch to all supported branches, as the previous fix was.

Discussion: https://postgr.es/m/7585.1529435872@sss.pgh.pa.us
2018-07-31 13:00:08 -04:00
a9793e0735 Fix misc typos, mostly in comments.
A collection of typos I happened to spot while reading code, as well as
grepping for common mistakes.

Backpatch to all supported versions, as applicable, to avoid conflicts
when backporting other commits in the future.
2018-07-18 16:54:54 +03:00
6532ca57ad Prevent accidental linking of system-supplied copies of libpq.so etc.
Back-patch commit dddfc4cb2, which broke LDFLAGS and related Makefile
variables into two parts, one for within-build-tree library references and
one for external libraries, to ensure that the order of -L flags has all
of the former before all of the latter.  This turns out to fix a problem
recently noted on buildfarm member peripatus, that we attempted to
incorporate code from libpgport.a into a shared library.  That will fail on
platforms that are sticky about putting non-PIC code into shared libraries.
(It's quite surprising we hadn't seen such failures before, since the code
in question has been like that for a long time.)

I think that peripatus' problem could have been fixed with just a subset
of this patch; but since the previous issue of accidentally linking to the
wrong copy of a Postgres shlib seems likely to bite people in the field,
let's just back-patch the whole change.  Now that commit dddfc4cb2 has
survived some beta testing, I'm less afraid to back-patch it than I was
at the time.

This also fixes undesired inclusion of "-DFRONTEND" in pg_config's CPPFLAGS
output (in 9.6 and up) and undesired inclusion of "-L../../src/common" in
its LDFLAGS output (in all supported branches).

Back-patch to v10 and older branches; this is already in v11.

Discussion: https://postgr.es/m/20180704234304.bq2dxispefl65odz@ler-imac.local
2018-07-09 17:23:31 -04:00
d5ad0e19d1 Fix libpq example programs
When these programs call pg_catalog.set_config, they need to check for
PGRES_TUPLES_OK instead of PGRES_COMMAND_OK.  Fix for
5770172cb0c9df9e6ce27c507b449557e5b45124.

Reported-by: Ideriha, Takeshi <ideriha.takeshi@jp.fujitsu.com>
2018-07-01 14:09:16 +02:00
ad73c07b44 Properly schema-qualify additional object types in getObjectDescription().
Collations, conversions, extended statistics objects (in >= v10),
and all four types of text search objects have schema-qualified names.
getObjectDescription() ignored that and would emit just the base name of
the object, potentially producing wrong or at least highly misleading
output.  Fix it to add the schema name whenever the object is not "visible"
in the current search path, as is the rule for other schema-qualifiable
object types.
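
For example (hypothetical names; pg_describe_object() exposes
getObjectDescription() at the SQL level):

    CREATE SCHEMA s;
    CREATE COLLATION s.mycoll FROM "C";
    SELECT pg_describe_object('pg_collation'::regclass, oid, 0)
      FROM pg_collation WHERE collname = 'mycoll';
    -- now reports the collation with its schema when s is not in the search path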

Although in common situations the output won't change, this seems to me
(tgl) to be a bug worthy of back-patching, hence do so.

Kyotaro Horiguchi, per a complaint from me

Discussion: https://postgr.es/m/20180522.182020.114074746.horiguchi.kyotaro@lab.ntt.co.jp
2018-05-24 12:07:41 -04:00
6d7629094a Fix misprocessing of equivalence classes involving record_eq().
canonicalize_ec_expression() is supposed to agree with coerce_type() as to
whether a RelabelType should be inserted to make a subexpression be valid
input for the operators of a given opclass.  However, it did the wrong
thing with named-composite-type inputs to record_eq(): it put in a
RelabelType to RECORDOID, which the parser doesn't.  In some cases this was
harmless because all code paths involving a particular equivalence class
did the same thing, but in other cases this would result in failing to
recognize a composite-type expression as being a member of an equivalence
class that it actually is a member of.  The most obvious bad effect was to
fail to recognize that an index on a composite column could provide the
sort order needed for a mergejoin on that column, as reported by Teodor
Sigaev.  I think there might be other, subtler, cases that result in
misoptimization.  It also seems possible that an unwanted RelabelType
would sometimes get into an emitted plan --- but because record_eq and
friends don't examine the declared type of their input expressions, that
would not create any visible problems.
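
A sketch of the mergejoin case described above (hypothetical names):

    CREATE TYPE pair AS (x int, y int);
    CREATE TABLE t1 (p pair);
    CREATE TABLE t2 (p pair);
    CREATE INDEX t1_p_idx ON t1 (p);
    -- the planner can now recognize that t1_p_idx provides the sort order
    -- needed for a mergejoin on:
    SELECT * FROM t1 JOIN t2 ON t1.p = t2.p;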

To fix, just treat RECORDOID as if it were a polymorphic type, which in
some sense it is.  We might want to consider formalizing that a bit more
someday, but for the moment this seems to be the only place where an
IsPolymorphicType() test ought to include RECORDOID as well.

This has been broken for a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/a6b22369-e3bf-4d49-f59d-0c41d3551e81@sigaev.ru
2018-05-16 13:46:09 -04:00