This feature has been a thorn in our sides for a long time, causing
many grammatical ambiguity problems. It doesn't seem worth the
pain to continue to support it, so remove it.
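For instance (the operator name here is hypothetical), a definition such as
    CREATE OPERATOR !!! (leftarg = bigint, function = factorial);
which used to create a postfix operator, is now rejected, and an
expression like "5 !!!" has to be written as an ordinary function call.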
There are some follow-on improvements we can make in the grammar,
but this commit only removes the bare minimum number of productions,
plus assorted backend support code.
Note that pg_dump and psql continue to have full support, since
they may be used against older servers. However, pg_dump warns
about postfix operators. There is also a check in pg_upgrade.
Documentation-wise, I (tgl) largely removed the "left unary"
terminology in favor of saying "prefix operator", which is
a more standard and IMO less confusing term.
I included a catversion bump, although there are no initial catalog data
changes here, to mark the boundary at which oprkind = 'r' stopped being
valid in pg_operator.
Mark Dilger, based on work by myself and Robert Haas;
review by John Naylor
Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
The "!" operator is our only built-in postfix operator. Remove it,
on the way to removal of grammar support for postfix operators.
There is also a "!!" prefix operator, but since it's been marked
deprecated for most of its existence, we might as well remove it too.
Also zap the SQL alias function numeric_fac(), which seems to have
equally little reason to live.
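For example, expressions previously written with these operators, such as
    SELECT 5 !;
    SELECT !! 5;
can be rewritten as a plain function call:
    SELECT factorial(5);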
Mark Dilger, based on work by myself and Robert Haas;
review by John Naylor
Discussion: https://postgr.es/m/38ca86db-42ab-9b48-2902-337a0d6b8311@2ndquadrant.com
This adds a new optional support function to the GiST access method:
sortsupport. If it is defined, the GiST index is built by sorting all data
to the order defined by the sortsupport's comparator function, and packing
the tuples in that order to GiST pages. This is similar to how B-tree
index build works, and is much faster than inserting the tuples one by
one. The resulting index is smaller too, because the pages are packed more
tightly, up to 'fillfactor'. The normal build method works by splitting
pages, which tends to lead to more wasted space.
The quality of the resulting index depends on how good the opclass-defined
sort order is. A good order preserves locality of the input data.
As the first user of this facility, add 'sortsupport' function to the
point_ops opclass. It sorts the points in Z-order (aka Morton Code), by
interleaving the bits of the X and Y coordinates.
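As a rough illustration (table and column names are hypothetical), an
index created with
    CREATE INDEX pts_p_idx ON pts USING gist (p);
on a point column p is now built by first sorting the points in Z-order
and then packing them into pages, instead of inserting them one by one.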
Author: Andrey Borodin
Reviewed-by: Pavel Borisov, Thomas Munro
Discussion: https://www.postgresql.org/message-id/1A36620E-CAD8-4267-9067-FB31385E7C0D%40yandex-team.ru
ALTER TABLE commands in an extension script are added to an event
trigger command list; but starting with commit b5810de3f4 this is done in
a memory context that's too short-lived, so when execution ends and time
comes to use the entries, they've already been freed.
(This would also be a problem with ALTER TABLE commands in a
multi-command query string, but these serendipitously end in
PortalContext -- which probably explains why it took so long for this to
be reported.)
Fix by using the memory context specifically set for that, instead.
Backpatch to 13, where the aforementioned commit appeared.
Reported-by: Philippe Beaudoin
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Discussion: https://postgr.es/m/20200902193715.6e0269d4@firost
transformCreateStmt() adjusts the transformed statement's RangeVar
to specify the target schema explicitly, for the express reason
of making sure that auxiliary statements derived by parse
transformation operate on the right table. But the refactoring
I did in commit 502898192 got this wrong and passed the untransformed
RangeVar to expandTableLikeClause(). This could lead to assertion
failures or weird misbehavior if the wrong table was accessed.
Per report from Alexander Lakhin. Like the previous patch, back-patch
to all supported branches.
Discussion: https://postgr.es/m/05051f9d-b32b-cb35-6735-0e9f2ab86b5f@gmail.com
EXTRACT of date type is implemented as a wrapper around EXTRACT of
timestamp, so the code is already tested there. But the externally
visible behavior of EXTRACT on date is not recorded anywhere. Since
there is some discussion about reimplementing or refactoring some of
this, add some more explicit tests of EXTRACT on date, similar in
structure to existing EXTRACT tests on other data types.
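The new tests exercise calls along these lines (a representative sketch,
not the exact test queries):
    SELECT EXTRACT(YEAR FROM DATE '2020-08-11');
    SELECT EXTRACT(EPOCH FROM DATE '2020-08-11');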
Discussion: https://www.postgresql.org/message-id/flat/42b73d2d-da12-ba9f-570a-420e0cce19d9@phystech.edu
Move the applicable code out of RelationBuildDesc(), since nailed relations
bypass that function. Non-assert builds experienced no known problems.
Back-patch to
v13, where commit c6b92041d38512a4176ed76ad06f713d2e6c01a8 introduced
rd_firstRelfilenodeSubid.
Kyotaro Horiguchi. Reported by Justin Pryzby.
Discussion: https://postgr.es/m/20200907023737.GA7158@telsasoft.com
The partition tuple routing code assumes that the partition chosen while
descending the partition hierarchy is always the correct one. This is
true except when the partition is the default partition and another
partition has been added concurrently: the partition constraint changes
and we don't recheck it. This can lead to tuples mistakenly being added
to the default partition that should have been rejected.
Fix by rechecking the default partition constraint while descending the
hierarchy.
An isolation test based on the reproduction steps described by Hao Wu
(with tweaks for extra coverage) is included.
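Roughly, the kind of sequence being emulated is (names hypothetical):
    -- session 1
    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE TABLE parted_def PARTITION OF parted DEFAULT;
    BEGIN;
    INSERT INTO parted VALUES (1);   -- routed to the default partition
    -- session 2, concurrently
    CREATE TABLE parted_1 (a int);
    ALTER TABLE parted ATTACH PARTITION parted_1 FOR VALUES IN (1);
With the recheck in place, the insert can no longer leave a row in the
default partition that violates the updated partition constraint.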
Backpatch to 12, where this bug came in with 898e5e3290a7.
Reported by: Hao Wu <hawu@vmware.com>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqFqBmcSSap4sFnCBUEL_VfOMmEKaQ3gwUhyfa4c7J_-nA@mail.gmail.com
Discussion: https://postgr.es/m/DM5PR0501MB3910E97A9EDFB4C775CF3D75A42F0@DM5PR0501MB3910.namprd05.prod.outlook.com
The isolation test added by a6642b3 is proving to be unstable, as once
the first transaction holding a lock on the top-most partitioned table
or on a partition commits, the commit order of the follow-up DROP TABLE
and REINDEX could become reversed depending on the timing.
The only part of the test that could be entirely reliable is the one
using a SHARE lock, allowing REINDEX to commit first, but it is the
least interesting of the set.
Per buildfarm members rorqual and mylodon.
Discussion: https://postgr.es/m/E1kFSBj-00062c-Mu@gemulon.postgresql.org
Until now, REINDEX was not able to work with partitioned tables and
indexes, forcing users to reindex partitions one by one. This extends
REINDEX INDEX and REINDEX TABLE so that they can accept a partitioned
index or table as input, respectively, and reindex all the partitions
assigned to them that have physical storage (foreign tables, partitioned
tables and partitioned indexes are skipped).
This shares some logic with schema and database REINDEX as each
partition gets processed in its own transaction after building a list of
relations to work on. This choice has the advantage of limiting the
number of invalid indexes to at most one partition with REINDEX
CONCURRENTLY in the event of an in-flight cancellation or failure, as the
only indexes handled at once in a single REINDEX CONCURRENTLY loop are
the ones of the partition being worked on.
Isolation tests are added to emulate some cases I bumped into while
developing this feature, particularly with the concurrent drop of a
leaf partition being reindexed. However, this is rather limited, as LOCK
would cause REINDEX to block in the first transaction building the list
of partitions.
Because of its multi-transaction nature, this new flavor cannot run in a
transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.
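For example (names hypothetical), both of the following now recurse into
all the leaf partitions:
    REINDEX TABLE measurement;
    REINDEX INDEX CONCURRENTLY measurement_logdate_idx;
with each partition processed in its own transaction.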
Author: Justin Pryzby, Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com
This covers functionality tests for streaming in-progress
subtransactions, streaming transactions containing rollback to
savepoints, and streaming transactions containing DDL.
Author: Tomas Vondra, Amit Kapila and Dilip Kumar
Reviewed-by: Dilip Kumar
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
Refactor replace_string() to use a StringInfo for the modifiable
string argument. This allows the string to be of indefinite size
initially and/or grow substantially during replacement. The previous
logic in convert_sourcefiles_in() had a hard-wired limit of 1024
bytes on any line in input/*.sql or output/*.out files. While we've
not had reports of trouble yet, it'd surely have bitten us someday.
This also fixes replace_string() so it won't get into an infinite
loop if the string-to-be-replaced is a substring of the replacement.
That's unlikely to happen in current usage, but the function surely
shouldn't depend on it.
Also fix ecpg_filter() to use a StringInfo and thereby remove its
hard limit of 300 bytes on the length of an ecpg source line.
Asim Rama Praveen and Georgios Kokolatos,
reviewed by Alvaro Herrera and myself
Discussion: https://postgr.es/m/y9Dlk2QhiZ39DhaB1QE9mgZ95HcOQKZCNtGwN7XCRKMdBRBnX_0woaRUtTjloEp4PKA6ERmcUcfq3lPGfKPOJ5xX2TV-5WoRYyySeNHRzdw=@protonmail.com
This commit improves the dependency registrations by taking advantage of
the preliminary work done in 63110c62, to group together the insertion
of dependencies of the same type into pg_depend. With the current layer
of routines available, and as only dependencies of the same type can be
grouped, there are code paths still doing more than one multi-insert
when it is necessary to register dependencies of multiple types
(constraint and index creation are two cases doing that).
While at it, this refactors some of the code to use ObjectAddressSet()
when manipulating object addresses.
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20200807061619.GA23955@paquier.xyz
The basic tests that defined SQL functions didn't actually run the
functions to see if they worked. Add that, and also fix a minor
mistake in a function that was revealed by this. (This is not a
question of test coverage, since there are other places where SQL
functions are run, but it is a bit of a silly test design.)
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
I happened to notice that the new test case I added in b55b4dad9
falls over if one runs "make check" repeatedly; though not in branches
after v10. That's because it was assuming that tmp_check/pgpass
wouldn't exist already. However, it's only been since v11 that the
Makefiles forcibly remove all of tmp_check/ before starting a TAP run.
This fix to unlink the file is therefore strictly necessary only in
v10 ... but it seems wisest to do it across the board, rather than
let the test rely on external logic to get the conditions right.
If this data is not collected, pg_dump segfaults if asked for column
inserts.
Fix by Fabrízio de Royes Mello
Backpatch to release 12 where the bug was introduced.
The "DROP ACCESS METHOD gist2" test will require locking the index
to be dropped and then its table; while most ordinary operations
lock a table first then its index. While no concurrent test scripts
should be touching fast_emp4000, autovacuum might chance to be
processing that table when the DROP runs, resulting in a deadlock
failure. This is pretty rare but we see it in the buildfarm from
time to time.
To fix, acquire a lock on fast_emp4000 before issuing the DROP.
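A sketch of the adjusted sequence (the test uses the regression
database's existing objects):
    BEGIN;
    LOCK TABLE fast_emp4000;
    DROP ACCESS METHOD gist2 CASCADE;
    COMMIT;
Taking the table lock first makes the DROP follow the usual lock ordering
(table before index), so it cannot deadlock against autovacuum.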
Since the point of the exercise is mostly to prevent buildfarm
failures, back-patch to 9.6 where this test was introduced.
Discussion: https://postgr.es/m/839004.1599185607@sss.pgh.pa.us
Because sigsetjmp() will restore the initial state with signals blocked,
the code path in bgworker.c for reporting an error and exiting would
execute that way. Usually this is fairly harmless; but if a parallel
worker had an error message exceeding the shared-memory communication
buffer size (16K) it would lock up, because it would wait for a
resume-sending signal from its parallel leader which it would never
detect.
To fix, just unblock signals at the appropriate point.
This can be shown to fail back to 9.6. The lack of parallel query
infrastructure makes it difficult to provide a simple test case for
9.5; but I'm pretty sure the issue exists in some form there as well,
so apply the code change there too.
Vignesh C, reviewed by Bharath Rupireddy, Robert Haas, and myself
Discussion: https://postgr.es/m/CALDaNm1d1hHPZUg3xU4XjtWBOLCrA+-2cJcLpw-cePZ=GgDVfA@mail.gmail.com
A backslash at the end of a line now causes the next line to be appended
to the current one (effectively, the backslash and newline are discarded).
This allows long HBA entries to be created without legibility problems.
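For example, a single logical entry can now be written as (a hypothetical
entry):
    host    all    all    192.168.12.10/32    \
        scram-sha-256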
While we're here, get rid of the former hard-wired length limit on
pg_hba.conf lines, by using an expansible StringInfo buffer instead
of a fixed-size local variable.
Since the same code is used to read the ident map file, these changes
apply there as well.
Fabien Coelho, reviewed by Justin Pryzby and David Zhang
Discussion: https://postgr.es/m/alpine.DEB.2.21.2003251906140.15243@pseudo
To add support for streaming of in-progress transactions into the
built-in logical replication, we need to do three things:
* Extend the logical replication protocol, to identify in-progress
transactions, and allow adding additional bits of information (e.g.
XID of subtransactions).
* Modify the output plugin (pgoutput) to implement the new stream
API callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle streamed
in-progress transactions by spilling the data to disk and then
replaying them on commit.
We however must explicitly disable streaming replication during
replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover we don't have a replication connection open so we have
nowhere to send the data anyway.
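On the subscriber side, streaming is controlled by the subscription's
"streaming" option; a minimal sketch (connection string and names are
hypothetical):
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (streaming = on);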
Author: Tomas Vondra, Dilip Kumar and Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
This splits a string at occurrences of a delimiter. It is exactly like
string_to_array() except for producing a set of values instead of an
array of values. Thus, the relationship of these two functions is
the same as between regexp_split_to_table() and regexp_split_to_array().
Although the same results could be had from unnest(string_to_array()),
this is somewhat faster than that, and anyway it seems reasonable to
have it for symmetry with the regexp functions.
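For example:
    SELECT string_to_table('xx~^~yy~^~zz', '~^~');
returns three rows containing xx, yy and zz, where string_to_array()
would have returned a single three-element array.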
Pavel Stehule, reviewed by Peter Smith
Discussion: https://postgr.es/m/CAFj8pRD8HOpjq2TqeTBhSo_QkzjLOhXzGCpKJ4nCs7Y9SQkuPw@mail.gmail.com
Listing a full set of relations with those psql meta-commands, without a
matching pattern, has never shown the access method associated with
each relation. This commit adds the access method of tables, indexes
and matviews, masking it for relation kinds where it does not apply.
Note that when HIDE_TABLEAM is enabled, the information does not show
up. This information is available when connecting to a server version of
at least 12, where table AMs were introduced.
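For example (relation name hypothetical), verbose listings such as
    \dt+ mytab
now include an "Access method" column, showing e.g. "heap" for ordinary
tables, as long as HIDE_TABLEAM is not set.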
Author: Georgios Kokolatos
Reviewed-by: Vignesh C, Michael Paquier, Justin Pryzby
Discussion: https://postgr.es/m/svaS1VTOEscES9CLKVTeKItjJP1EEJuBhTsA0ESOdlnbXeQSgycYwVlliL5zt8Jwcfo4ATYDXtEqsExxjkSkkhCSTCL8fnRgaCAJdr0unUg=@protonmail.com
When multiple relations are reindexed, a scan of pg_class is done first
to build the list of relations to work on. However the REINDEX logic
has never checked if a relation listed still exists when beginning the
work on it, causing for example sudden cache lookup failures.
This commit adds safeguards against dropped relations for REINDEX,
similarly to VACUUM or CLUSTER where we try to open the relation,
ignoring it if it is missing. A new option is added to the REINDEX
routines to control whether a missing relation is OK to ignore or not.
An isolation test, based on REINDEX SCHEMA, is added for the concurrent
and non-concurrent cases.
Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/20200813043805.GE11663@paquier.xyz
Add a test case that exercises vacuum's deletion of empty GIN
posting pages. Since this is a temp table, it should now work
reliably to delete a bunch of rows and immediately VACUUM.
Before the preceding commit, this would not have had the desired
effect, at least not in parallel regression tests.
Discussion: https://postgr.es/m/3490536.1598629609@sss.pgh.pa.us
We were already raising an error for DROP INDEX CONCURRENTLY on a
partitioned table, albeit a different and confusing one:
ERROR: DROP INDEX CONCURRENTLY must be first action in transaction
Change that to throw a more comprehensible error:
ERROR: cannot drop partitioned index "%s" concurrently
Michael Paquier authored the test case for indexes on temporary
partitioned tables.
Backpatch to 11, where indexes on partitioned tables were added.
Reported-by: Jan Mussler <jan.mussler@zalando.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16594-d2956ca909585067@postgresql.org
Historically there's been a hard-wired assumption here that no line of
a .pgpass file could be as long as NAMEDATALEN*5 bytes. That's a bit
shaky to start off with, because (a) there's no reason to suppose that
host names fit in NAMEDATALEN, and (b) this figure fails to allow for
backslash escape characters. However, it fails completely if someone
wants to use a very long password, and we're now hearing reports of
people wanting to use "security tokens" that can run up to several
hundred bytes. Another angle is that the file is specified to allow
comment lines, but there's no reason to assume that long comment lines
aren't possible.
Rather than guessing at what might be a more suitable limit, let's
replace the fixed-size buffer with an expansible PQExpBuffer. That
adds one malloc/free cycle to the typical use-case, but that's surely
pretty cheap relative to the I/O this code has to do.
Also, add TAP test cases to exercise this code, because there was no
test coverage before.
This reverts most of commit 2eb3bc588, as there's no longer a need for
a warning message about overlength .pgpass lines. (I kept the explicit
check for comment lines, though.)
In HEAD and v13, this also fixes an oversight in 74a308cf5: there's not
much point in explicit_bzero'ing the line buffer if we only do so in two
of the three exit paths.
Back-patch to all supported branches, except that the test case only
goes back to v10 where src/test/authentication/ was added.
Discussion: https://postgr.es/m/4187382.1598909041@sss.pgh.pa.us
collectMatchBitmap() needs to re-find the index tuple it was previously
looking at, after transiently dropping lock on the index page it's on.
The tuple should still exist and be at its prior position or somewhere
to the right of that, since ginvacuum never removes tuples but
concurrent insertions could add one. However, there was a thinko in
that logic, to the effect of expecting any inserted tuples to have the
same index "attnum" as what we'd been scanning. Since there's no
physical separation of tuples with different attnums, it's not terribly
hard to devise scenarios where this fails, leading to transient "lost
saved point in index" errors. (While I've duplicated this with manual
testing, it seems impossible to make a reproducible test case with our
available testing technology.)
Fix by just continuing the scan when the attnum doesn't match.
While here, improve the error message used if we do fail, so that it
matches the wording used in btree for a similar case.
collectMatchBitmap()'s posting-tree code path was previously not
exercised at all by our regression tests. While I can't make
a regression test that exhibits the bug, I can at least improve
the code coverage here, so do that. The test case I made for this
is an extension of one added by 4b754d6c1, so it only works in
HEAD and v13; didn't seem worth trying hard to back-patch it.
Per bug #16595 from Jesse Kinkead. This has been broken since
multicolumn capability was added to GIN (commit 27cb66fdf),
so back-patch to all supported branches.
Discussion: https://postgr.es/m/16595-633118be8eef9ce2@postgresql.org
The trouble with doing this is that an apparently-constant subquery
output column isn't really constant if it is a grouping column that
appears in only some of the grouping sets. A qual using such a
column would be subject to incorrect const-folding after push-down,
as seen in bug #16585 from Paul Sivash.
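A simplified sketch of the kind of query affected (not the exact query
from the bug report):
    SELECT * FROM
      (SELECT 1 AS flag, b, sum(c) AS s
         FROM tab
        GROUP BY GROUPING SETS ((flag), (b))) ss
    WHERE flag = 1;
In the rows produced for the (b) grouping set, flag is NULL rather than
the constant 1, so pushing the qual down and const-folding it inside the
subquery could let those rows through incorrectly.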
To fix, just disable qual pushdown altogether if the sub-query has
nonempty groupingSets. While we could imagine far less restrictive
solutions, there is not much point in working harder right now,
because subquery_planner() won't move HAVING clauses to WHERE within
such a subquery. If the qual stays in HAVING it's not going to be
a lot more useful than if we'd kept it at the outer level.
Having said that, this restriction could be removed if we used a
parsetree representation that distinguished such outputs from actual
constants, which is something I hope to do in future. Hence, make
the patch a minimal addition rather than integrating it more tightly
(e.g. by renumbering the existing items in subquery_is_pushdown_safe's
comment).
Back-patch to 9.5 where grouping sets were introduced.
Discussion: https://postgr.es/m/16585-9d8c340d23ade8c1@postgresql.org
Commit 1281a5c90 rearranged the logic in this area rather drastically,
and it broke the case of adding a foreign key constraint in the same
ALTER that adds the pkey or unique constraint it depends on. While
self-referential fkeys are surely a pretty niche case, this used to
work so we shouldn't break it.
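The now-working case looks roughly like this (names hypothetical):
    CREATE TABLE tree (id int, parent int);
    ALTER TABLE tree
      ADD PRIMARY KEY (id),
      ADD FOREIGN KEY (parent) REFERENCES tree (id);
where the foreign key added by the second subcommand depends on the
primary key added by the first, within the same ALTER.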
To fix, reorganize the scheduling rules in ATParseTransformCmd so
that a transformed AT_AddConstraint subcommand will be delayed into
a later pass in all cases, not only when it's been spit out as a
side-effect of parsing some other command type.
Also tweak the logic so that we won't run ATParseTransformCmd twice
while doing this. It seems to work even without that, but it's
surely wasting cycles to do so.
Per bug #16589 from Jeremy Evans. Back-patch to v13 where the new
code was introduced.
Discussion: https://postgr.es/m/16589-31c8d981ca503896@postgresql.org
If a CREATE TABLE command uses both LIKE and traditional inheritance,
Vars in CHECK constraints and expression indexes that are absorbed
from a LIKE parent table tended to get mis-numbered, resulting in
wrong answers and/or bizarre error messages (though probably not any
actual crashes, thanks to validation occurring in the executor).
In v12 and up, the same could happen to Vars in GENERATED expressions,
even in cases with no LIKE clause but multiple traditional-inheritance
parents.
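A sketch of the LIKE variant of the problem (names hypothetical):
    CREATE TABLE inh_parent (a int);
    CREATE TABLE like_parent (b int CHECK (b > 0));
    CREATE TABLE child (LIKE like_parent INCLUDING CONSTRAINTS)
      INHERITS (inh_parent);
Column b ends up at a different attribute number in child than in
like_parent, because the inherited column a comes first, so the absorbed
CHECK constraint must have its Vars renumbered accordingly.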
The cause of the problem for LIKE is that parse_utilcmd.c supposed
it could renumber such Vars correctly during transformCreateStmt(),
which it cannot since we have not yet accounted for columns added via
inheritance. Fix that by postponing processing of LIKE INCLUDING
CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed
DefineRelation().
The error with GENERATED and multiple inheritance is a simple oversight
in MergeAttributes(); it knows it has to renumber Vars in inherited
CHECK constraints, but forgot to apply the same processing to inherited
GENERATED expressions (a/k/a defaults).
Per bug #16272 from Tom Gottfried. The non-GENERATED variants of the
issue are ancient, presumably dating right back to the addition of
CREATE TABLE LIKE; hence back-patch to all supported branches.
Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org
Commit 9d701e624f caused the regression test for EXPLAIN to fail on
the buildfarm member prion. This happened because of instability in the
test output, i.e., in text format, whether the "Planning:" line is output
varies depending on the system state.
This commit updates the regression test so that it ignores the
"Planning:" line, to produce more stable test output and get rid of
the test failure.
Back-patch to v13.
Author: Fujii Masao
Discussion: https://postgr.es/m/1803897.1598021621@sss.pgh.pa.us
Commit ce77abe63c allowed EXPLAIN (BUFFERS) to report the information
on buffer usage during planning phase. However three issues were
reported regarding this feature.
(1) Previously, the EXPLAIN option BUFFERS required ANALYZE. So the query
had to be actually executed by specifying ANALYZE even when we
wanted to see only the planner's buffer usage. This was inconvenient,
especially when the query was a write query like DELETE.
(2) EXPLAIN included the planner's buffer usage in summary
information. So the SUMMARY option had to be enabled to report that.
Also this format was confusing.
(3) The output structure for planning information was not consistent
between TEXT format and the others. For example, "Planning" tag
was output in JSON format, but not in TEXT format.
For (1), this commit allows us to perform EXPLAIN (BUFFERS) without
ANALYZE to report the planner's buffer usage.
For (2), this commit changed EXPLAIN output so that the planner's
buffer usage is reported before summary information.
For (3), this commit made the output structure for planning
information more consistent between the formats.
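For instance, it is now possible to run (table name hypothetical)
    EXPLAIN (BUFFERS) SELECT * FROM tab;
without ANALYZE, with the planner's buffer usage reported under a
"Planning:" group in text format, matching the "Planning" object emitted
in the other formats.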
Back-patch to v13 where the planner's buffer usage was allowed to
be reported in EXPLAIN.
Reported-by: Pierre Giraud, David Rowley
Author: Fujii Masao
Reviewed-by: David Rowley, Julien Rouhaud, Pierre Giraud
Discussion: https://postgr.es/m/07b226e6-fa49-687f-b110-b7c37572f69e@dalibo.com
This view displays the usage of all the memory contexts of the server
process attached to the current session. This information is useful for
investigating the cause of backend-local memory bloat.
This information can also be collected by calling
MemoryContextStats(TopMemoryContext) via a debugger. But that technique
cannot be used in some environments because no debugger is available
there, and it outputs lots of text that is not easy to analyze.
So, the pg_backend_memory_contexts view allows us to access backend-local
memory context information more easily.
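A typical use might look like this (a sketch; see the view's
documentation for the full column list):
    SELECT name, parent, total_bytes, used_bytes
      FROM pg_backend_memory_contexts
     ORDER BY total_bytes DESC
     LIMIT 10;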
Bump catalog version.
Author: Atsushi Torikoshi, Fujii Masao
Reviewed-by: Tatsuhito Kasahara, Andres Freund, Daniel Gustafsson, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/72a656e0f71d0860161e0b3f67e4d771@oss.nttdata.com
PL/Sample is an example template of a procedural-language handler. It
can be used as a base to implement a custom PL, or as a facility to test
APIs dedicated to PLs. Much more could be done in this module, like
adding a simple validator, but this is left as future work.
The documentation originally included some C code to explain the basics
of PL handler implementation, but it was outdated and not really helpful
when trying to implement a new procedural language, particularly when it
came to the integration of a PL installation with CREATE EXTENSION.
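A minimal sketch of exercising the handler once the module is installed
(the function body and its behavior are illustrative only):
    CREATE EXTENSION plsample;
    CREATE FUNCTION plsample_hello() RETURNS void AS $$
      Hello, world!
    $$ LANGUAGE plsample;
    SELECT plsample_hello();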
Author: Mark Wong
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20200612172648.GA3327@2ndQuadrant.com
Reuse cautionary language from src/test/ssl/README in
src/test/kerberos/README. SLRUs have had access to six-character
segments names since commit 73c986adde5d73a5e2555da9b5c8facedb146dcd,
and recovery stopped calling HeapTupleHeaderAdvanceLatestRemovedXid() in
commit 558a9165e081d1936573e5a7d576f5febd7fb55a. The other corrections
are more self-evident.
nodeSubplan.c expects that the testexpr for a hashable ANY SubPlan
has the form of one or more OpExprs whose LHS is an expression of the
outer query's, while the RHS is an expression over Params representing
output columns of the subquery. However, the planner only went as far
as verifying that the clauses were all binary OpExprs. This works
99.99% of the time, because the clauses have the right shape when
emitted by the parser --- but it's possible for function inlining to
break that, as reported by PegoraroF10. To fix, teach the planner
to check that the LHS and RHS contain the right things, or more
accurately don't contain the wrong things. Given that this has been
broken for years without anyone noticing, it seems sufficient to just
give up hashing when it happens, rather than go to the trouble of
commuting the clauses back again (which wouldn't necessarily work
anyway).
While poking at that, I also noticed that nodeSubplan.c had a baked-in
assumption that the number of hash clauses is identical to the number
of subquery output columns. Again, that's fine as far as parser output
goes, but it's not hard to break it via function inlining. There seems
little reason for that assumption though --- AFAICS, the only thing
it's buying us is not having to store the number of hash clauses
explicitly. Adding code to the planner to reject such cases would take
more code than getting nodeSubplan.c to cope, so I fixed it that way.
This has been broken for as long as we've had hashable SubPlans,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/1549209182255-0.post@n3.nabble.com
StrNCpy() and strlcpy() are equivalent, except that StrNCpy() zero-fills
the entire destination buffer instead of providing just one trailing
zero. For all but a tiny number of callers, that's just overhead rather
than being desirable.
Remove StrNCpy() as it is now unused.
In some cases, namestrcpy() is the more appropriate function to use.
While we're here, simplify the API of namestrcpy(): Remove the return
value, don't check for NULL input. Nothing was using that anyway.
Also, remove a few unused name-related functions.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/44f5e198-36f6-6cdb-7fa9-60e34784daae%402ndquadrant.com
This is like CVE-2018-1058 commit
582edc369cdbd348d68441fc50fa26a84afd0c1a. Today, a malicious user of a
publisher or subscriber database can invoke arbitrary SQL functions
under an identity running replication, often a superuser. This fix may
cause "does not exist" or "no schema has been selected to create in"
errors in a replication process. After upgrading, consider watching
server logs for these errors. Objects accruing schema qualification in
the wake of the earlier commit are unlikely to need further correction.
Back-patch to v10, which introduced logical replication.
Security: CVE-2020-14349
We have various cases where we allow DDL on tables to be performed with
less than full AccessExclusiveLock. This requires concurrent queries
to be able to cope with the DDL change mid-flight, but up to now we had
no repeatable way to test such cases. To improve that, invent a test
module that allows halting a backend after planning and then resuming
execution once we've done desired actions in another session. (The same
approach could be used to inject delays in other places, if there's a
suitable hook available.)
This commit includes a single test case, which is meant to exercise the
previously-untestable ExecCreatePartitionPruneState code repaired by
commit 7a980dfc6. We'd probably not bother with this if that were the
only foreseen benefit, but I expect additional test cases will use this
infrastructure in the future.
Test module by Andy Fan, partition-addition test case by me.
Discussion: https://postgr.es/m/20200802181131.GA27754@telsasoft.com
Commit 13838740f fixed some issues with step generation in partition
pruning, but there was yet another one: get_steps_using_prefix() assumes
that clauses in the passed-in prefix list are sorted in ascending order
of their partition key numbers, but the caller failed to ensure this for
range partitioning, which led to an assertion failure in debug builds.
Adjust the caller function to arrange the clauses in the prefix list in
the required order for range partitioning.
Back-patch to v11, like the previous commit.
Patch by me, reviewed by Amit Langote.
Discussion: https://postgr.es/m/CAPmGK16jkXiFG0YqMbU66wte-oJTfW6D1HaNvQf%3D%2B5o9%3Dm55wQ%40mail.gmail.com