We've had numerous bug reports about how (1) IF NOT EXISTS clauses in
ALTER TABLE don't behave as expected, and (2) combining certain actions
into one ALTER TABLE doesn't work, though executing the same actions as
separate statements does. This patch cleans up all of the cases so far
reported from the field, though there are still some oddities associated
with identity columns.
The core problem behind all of these bugs is that we do parse analysis
of ALTER TABLE subcommands too soon, before starting execution of the
statement. The root of the bugs in group (1) is that parse analysis
schedules derived commands (such as a CREATE SEQUENCE for a serial
column) before it's known whether the IF NOT EXISTS clause should cause
a subcommand to be skipped. The root of the bugs in group (2) is that
earlier subcommands may change the catalog state that later subcommands
need to be parsed against.
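As an illustration (all names here are hypothetical), the two classes
of trouble look roughly like this:

    -- group (1): the CREATE SEQUENCE for the serial column was
    -- scheduled during parse analysis, before discovering that the
    -- column already exists and the subcommand must be skipped
    ALTER TABLE t ADD COLUMN IF NOT EXISTS id serial;

    -- group (2): the second subcommand must be parsed against catalog
    -- state that already includes the column added by the first one
    ALTER TABLE t ADD COLUMN f1 int,
                  ADD CONSTRAINT f1_positive CHECK (f1 > 0);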
Hence, postpone parse analysis of ALTER TABLE's subcommands, and do
that one subcommand at a time, during "phase 2" of ALTER TABLE, which
is the phase that does catalog rewrites. Thus the catalog effects
of earlier subcommands are already visible when we analyze later ones.
(The sole exception is that we do parse analysis for ALTER COLUMN TYPE
subcommands during phase 1, so that their USING expressions can be
parsed against the table's original state, which is what we need.
Arguably, these bugs stem from falsely concluding that because ALTER
COLUMN TYPE must do early parse analysis, every other command subtype
can too.)
This means that ALTER TABLE itself must deal with execution of any
non-ALTER-TABLE derived statements that are generated by parse analysis.
Add a suitable entry point to utility.c to accept those recursive
calls, and create a struct to pass through the information needed by
the recursive call, rather than making the argument lists of
AlterTable() and friends even longer.
Getting this to work correctly required a little bit of fiddling
with the subcommand pass structure, in particular breaking up
AT_PASS_ADD_CONSTR into multiple passes. But otherwise it's mostly
a pretty straightforward application of the above ideas.
Fixing the residual issues for identity columns requires refactoring of
where the dependency link from an identity column to its sequence gets
set up. So that seems like suitable material for a separate patch,
especially since this one is pretty big already.
Discussion: https://postgr.es/m/10365.1558909428@sss.pgh.pa.us
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for ANALYZE.
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Co-authored-by: Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>
Reviewed-by: Julien Rouhaud, Robert Haas, Anthony Nowocien, Kyotaro Horiguchi,
Vignesh C, Amit Langote
Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions
in IndexAmRoutine for parallel vacuum. The amusemaintenanceworkmem field
tells whether a particular IndexAM uses maintenance_work_mem or not.
This will help in controlling the memory used by individual workers, as
otherwise each worker could consume memory equal to
maintenance_work_mem. The amparallelvacuumoptions field tells whether a
particular IndexAM participates in a parallel vacuum and, if so, in
which phases (bulkdelete, vacuumcleanup) of vacuum.
Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
Discussion:
https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com
This reverts part of commit 7559d8ebfa. The copyright script has
already been updated to skip *.key files.
Reported-by: Alvaro Herrera
Discussion: https://postgr.es/m/20200102184059.GA25435@alvherre.pgsql
Backpatch-through: master
A view with conditional INSTEAD rules and no unconditional INSTEAD
rules or INSTEAD OF triggers is not auto-updatable. Previously we
relied on a check in the executor to catch this, but that's
problematic since the planner may fail to properly handle such a query
and thus return a particularly unhelpful error to the user, before
reaching the executor check.
Instead, trap this in the rewriter and report the correct error there.
Doing so also allows us to include more useful error detail than the
executor check can provide. This doesn't change the existing behaviour
of updatable views; it merely ensures that useful error messages are
reported when a view isn't updatable.
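As a hypothetical example of the affected case:

    CREATE TABLE base (a int, b text);
    CREATE VIEW v AS SELECT a, b FROM base;
    -- a conditional INSTEAD rule, with nothing to handle the
    -- non-matching case, means v is not auto-updatable
    CREATE RULE v_del AS ON DELETE TO v
      WHERE old.a < 0 DO INSTEAD NOTHING;
    DELETE FROM v WHERE a = 1;  -- now rejected up front by the rewriter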
Per report from Pengzhou Tang, though not adopting that suggested fix.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
This test was trying to test the mechanism to release kernel FDs as needed
to get us under the max_safe_fds limit in case of spill files. To do that,
it needs to set max_files_per_process to a very low value which doesn't
even permit starting of the server in the case when there are a few already
opened files. This test also won't work on platforms where we use one FD
per semaphore.
Backpatch-through: 10, where this test was added
Discussion:
https://postgr.es/m/CAA4eK1LHhERi06Q+MmP9qBXBBboi+7WV3910J0aUgz71LcnKAw@mail.gmail.com
https://postgr.es/m/6485.1578583522@sss.pgh.pa.us
Previously, the core scanner's yy_transition[] array had 37045 elements.
Since that number is larger than INT16_MAX, Flex generated the array to
contain 32-bit integers. By reimplementing some of the bulkier scanner
rules, this patch reduces the array to 20495 elements. The much smaller
total length, combined with the consequent use of 16-bit integers for
the array elements, reduces the binary size by over 200kB. This was
accomplished in two ways:
1. Consolidate handling of quote continuations into a new start condition,
rather than duplicating that logic for five different string types.
2. Treat Unicode strings and identifiers followed by a UESCAPE sequence
as three separate tokens, rather than one. The logic to de-escape
Unicode strings is moved to the filter code in parser.c, which already
had the ability to provide special processing for token sequences.
While we could have implemented the conversion in the grammar, that
approach was rejected for performance and maintainability reasons.
Performance in microbenchmarks of raw parsing seems equal or slightly
faster in most cases, and it's reasonable to expect that in real-world
usage (with more competition for the CPU cache) there will be a larger
win. The exception is UESCAPE sequences; lexing those is about 10%
slower, primarily because the scanner now has to be called three times
rather than once. This seems acceptable since that feature is very
rarely used.
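For reference, the construct in question is a Unicode-escaped string or
identifier with an explicit escape character, e.g.:

    SELECT U&'d!0061t!+000061' UESCAPE '!';  -- same as SELECT 'data'

The scanner now returns the U& literal, the UESCAPE keyword, and the
quoted escape character as separate tokens, and the filter code in
parser.c performs the de-escaping.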
The psql and ecpg lexers are likewise modified, primarily because we
want to keep them all in sync. Since those lexers don't use the
space-hogging -CF option, the space savings are much smaller, but
still good for perhaps 10kB apiece.
While at it, merge the ecpg lexer's handling of C-style comments used
in SQL and in C. Those have different rules regarding nested comments,
but since we already have the ability to keep track of the previous
start condition, we can use that to handle both cases within a single
start condition. This matches the core scanner more closely.
John Naylor
Discussion: https://postgr.es/m/CACPNZCvaoa3EgVWm5yZhcSTX6RAtaLgniCPcBVOCwm8h3xpWkw@mail.gmail.com
Until now we've only used a single multivariate MCV list per relation,
covering the largest number of clauses. So for example given a query
SELECT * FROM t WHERE a = 1 AND b = 1 AND c = 1 AND d = 1
and extended statistics on (a,b) and (c,d), we'd only pick and use one
of them. This commit improves this by repeatedly picking and applying
the best statistics (matching the largest number of remaining clauses)
until no additional statistics object is applicable.
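For instance, with (hypothetical) statistics objects defined as

    CREATE STATISTICS s_ab (mcv) ON a, b FROM t;
    CREATE STATISTICS s_cd (mcv) ON c, d FROM t;

the query above can now use s_ab for the clauses on (a,b) and then
s_cd for the remaining clauses on (c,d), instead of just one of them.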
This greedy algorithm is simple, but may not be optimal. A different
choice of statistics may leave fewer clauses unestimated and/or give
better estimates for some other reason.
This can however happen only when there are overlapping statistics, and
selecting one makes it impossible to use the other. E.g. with statistics
on (a,b), (c,d), (b,c,d), we may pick either (a,b) and (c,d) or (b,c,d).
But it's not clear which option is the best one.
We however assume cases like this are rare, and the easiest solution is
to define statistics covering the whole group of correlated columns. In
the future we might support overlapping stats, using some of the clauses
as conditions (in conditional probability sense).
Author: Tomas Vondra
Reviewed-by: Mark Dilger, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
When considering functional dependencies during selectivity estimation,
it's not necessary to bother with selecting the best extended statistic
object and then use just dependencies from it. We can simply consider
all applicable functional dependencies at once.
This means we need to deserialize all (applicable) dependencies before
applying them to the clauses. This is a bit more expensive than picking
the best statistics and deserializing dependencies for it. To minimize
the additional cost, we ignore statistics that are not applicable.
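As a hypothetical example, dependency degrees from both of these
objects are now applied together:

    CREATE STATISTICS d_ab (dependencies) ON a, b FROM t;
    CREATE STATISTICS d_bc (dependencies) ON b, c FROM t;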
Author: Tomas Vondra
Reviewed-by: Mark Dilger
Discussion: https://postgr.es/m/20191028152048.jc6pqv5hb7j77ocp@development
On the publisher, it was assumed that an INSERT change cannot happen for
a relation with no replica identity. However, this is true only for a
change that needs references to old rows, aka UPDATE or DELETE, so
trying to use logical replication with a relation that has no replica
identity led to an assertion failure in the publisher when issuing an
INSERT. This commit removes the incorrect assertion, and adds more
regression tests to provide coverage for relations without replica
identity.
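A sequence of roughly this shape reproduced the problem on an
assert-enabled publisher (once a subscriber started consuming the
publication):

    CREATE TABLE t (a int);
    ALTER TABLE t REPLICA IDENTITY NOTHING;
    CREATE PUBLICATION p FOR TABLE t;
    INSERT INTO t VALUES (1);  -- asserted while the change was decoded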
Reported-by: Neha Sharma
Author: Dilip Kumar, Michael Paquier
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CANiYTQsL1Hb8_Km08qd32svrqNumXLJeoGo014O7VZymgOhZEA@mail.gmail.com
Backpatch-through: 10
Fix assorted bugs in handling of non-blocking I/O when using GSSAPI
encryption. The encryption layer could return the wrong status
information to its caller, resulting in effectively dropping some data
(or possibly in aborting a not-broken connection), or in a "livelock"
situation where data remains to be sent but the upper layers think
transmission is done and just go to sleep. There were multiple small
thinkos contributing to that, as well as one big one (failure to think
through what to do when a send fails after having already transmitted
data). Note that these errors could cause failures whether the client
application asked for non-blocking I/O or not, since both libpq and
the backend always run things in non-block mode at this level.
Also get rid of use of static variables for GSSAPI inside libpq;
that's entirely not okay given that multiple connections could be
open at once inside a single client process.
Also adjust a bunch of random small discrepancies between the frontend
and backend versions of the send/receive functions -- except for error
handling, they should be identical, and now they are.
Also extend the Kerberos TAP tests to exercise cases where nontrivial
amounts of data need to be pushed through encryption. Before, those
tests didn't provide any useful coverage at all for the cases of
interest here. (They still might not, depending on timing, but at
least there's a chance.)
Per complaint from pmc@citylink and subsequent investigation.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/20200109181822.GA74698@gate.oper.dinoex.org
This tells you about allocations that have been made from the main
shared memory segment. The original patch also tried to show information
about dynamic shared memory allocation as well, but I decided to
leave that problem for another time.
Andres Freund and Robert Haas, reviewed by Michael Paquier, Marti
Raudsepp, Tom Lane, Álvaro Herrera, and Kyotaro Horiguchi.
Discussion: http://postgr.es/m/20140504114417.GM12715@awork2.anarazel.de
Use the parser's standard type coercion machinery to convert the
output column(s) of a SQL function's final SELECT or RETURNING
to the type(s) they should have according to the function's declared
result type. We'll allow any case where an assignment-level
coercion is available. Previously, we failed unless the required
coercion was a binary-compatible one (and the documentation ignored
this, falsely claiming that the types must match exactly).
Notably, the coercion now accounts for typmods, so that cases where
a SQL function is declared to return a composite type whose columns
are typmod-constrained now behave as one would expect. Arguably
this aspect is a bug fix, but the overall behavioral change here
seems too large to consider back-patching.
A nice side-effect is that functions can now be inlined in a
few cases where we previously failed to do so because of type
mismatches.
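For example (hypothetical function), an assignment-level coercion such
as int4-to-int8 is no longer rejected:

    CREATE FUNCTION f() RETURNS bigint
      LANGUAGE sql AS $$ SELECT 42::int $$;
    -- previously failed with a "return type mismatch" error, since
    -- int-to-bigint is not a binary-compatible coercion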
Discussion: https://postgr.es/m/18929.1574895430@sss.pgh.pa.us
ALTER TABLE failed if a column referenced in a GENERATED expression
had been added or changed in type earlier in the ALTER command.
That's because the GENERATED expression needs to be evaluated
against the table's updated tuples, but it was being evaluated
against the original tuples. (Fortunately the executor has adequate
cross-checks to notice the mismatch, so we just got an obscure error
message and not anything more dangerous.)
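A hypothetical statement of the affected shape:

    CREATE TABLE t (a int);
    ALTER TABLE t
      ADD COLUMN b int,
      ADD COLUMN c int GENERATED ALWAYS AS (b + 1) STORED;
    -- the generation expression for c must be evaluated against tuples
    -- that already include the just-added column b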
Per report from Andreas Joseph Krogh. Back-patch to v12 where
GENERATED was added.
Discussion: https://postgr.es/m/VisenaEmail.200.231b0a41523275d0.16ea7f800c7@tc7-visena
Instead of hard-wiring the netmask as /32, allow it to be specified
where we specify the server address. This will ease changing the
test to use IPv6, when/if somebody wants to do that.
Also remove the hard-wired pg_hba.conf entries for IPv6 (::1/128).
These have never had any usefulness, because the client side
of the tests has always explicitly connected to $SERVERHOSTADDR
which has always been set to IPv4 (127.0.0.1). All they accomplish
is to break the test on non-IPv6-supporting hosts, and besides
that they violate the express intent of the code to minimize the
server's range of allowed connections.
This could be back-patched, perhaps, but for now I don't see
a need to.
Discussion: https://postgr.es/m/1899.1578356089@sss.pgh.pa.us
Support for these versions (OpenSSL 0.9.8 and 1.0.0) has lapsed from
all the major vendors (for example, RHEL5 uses a version based on
0.9.8, and RHEL6 uses 1.0.1), and keeping them working created some
extra maintenance work. Upstream stopped supporting 0.9.8 in December
2015 and 1.0.0 in February 2016.
Note that since b1abfec, the default SSL protocol version set with
ssl_min_protocol_version is TLSv1.2, whose support was added in OpenSSL
1.0.1, so there is no point in forcing ssl_min_protocol_version to
TLSv1 in the SSL tests.
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson, Tom Lane
Discussion: https://postgr.es/m/20191205083252.GE5064@paquier.xyz
When row triggers exist on partitioned partitions (that is, partitions
that are themselves partitioned) and are not part of either FKs or
deferred unique constraints, they are not correctly cloned to their
partitions. That's because they are marked "internal", and such
triggers are purposefully skipped when doing the clone-triggers dance.
Fix by relaxing the condition under which internal triggers are
skipped.
Amit Langote initially diagnosed the problem and proposed a fix, but I
used a different approach.
Reported-by: Petr Fedorov
Discussion: https://postgr.es/m/6b3f0646-ba8c-b3a9-c62d-1c6651a1920f@phystech.edu
Up to now, psql's tab-complete.c has had exactly no regression test
coverage. This patch is an experimental attempt to add some.
This needs Perl's IO::Pty module, which isn't installed everywhere,
so the test script just skips all tests if that's not present.
There may be other portability gotchas too, so I await buildfarm
results with interest.
So far this just covers a few very basic keyword-completion and
query-driven-completion scenarios, which should be enough to let us
get a feel for whether this is practical at all from a portability
standpoint. If it is, there's lots more that can be done.
Discussion: https://postgr.es/m/10967.1577562752@sss.pgh.pa.us
When I added the ParseNamespaceItem data structure (in commit 5ebaaa494),
it wasn't very tightly integrated into the parser's APIs. In the wake of
adding p_rtindex to that struct (commit b541e9acc), there is a good reason
to make more use of it: by passing around ParseNamespaceItem pointers
instead of bare RTE pointers, we can get rid of various messy methods for
passing back or deducing the rangetable index of an RTE during parsing.
Hence, refactor the addRangeTableEntryXXX functions to build and return
a ParseNamespaceItem struct, not just the RTE proper; and replace
addRTEtoQuery with addNSItemToQuery, which is passed a ParseNamespaceItem
rather than building one internally.
Also, add per-column data (a ParseNamespaceColumn array) to each
ParseNamespaceItem. These arrays are built during addRangeTableEntryXXX,
where we have column type data at hand so that it's nearly free to fill
the data structure. Later, when we need to build Vars referencing RTEs,
we can use the ParseNamespaceColumn info to avoid the rather expensive
operations done in get_rte_attribute_type() or expandRTE().
get_rte_attribute_type() is indeed dead code now, so I've removed it.
This makes for a useful improvement in parse analysis speed, around 20%
in one moderately-complex test query.
The ParseNamespaceColumn structs also include Var identity information
(varno/varattno). That info isn't actually being used in this patch,
except that p_varno == 0 is a handy test for a dropped column.
A follow-on patch will make more use of it.
Discussion: https://postgr.es/m/2461.1577764221@sss.pgh.pa.us
Currently while decoding changes, if the number of changes exceeds a
certain threshold, we spill those to disk. And this happens for each
(sub)transaction. Later, while reading these files back, we don't close
any of them until we have read them all, so if the number of such files
exceeds the maximum number of file descriptors, the operation errors
out.
Use PathNameOpenFile interface to open these files as that internally has
the mechanism to release kernel FDs as needed to get us under the
max_safe_fds limit.
Reported-by: Amit Khandekar
Author: Amit Khandekar
Reviewed-by: Amit Kapila
Backpatch-through: 9.4
Discussion: https://postgr.es/m/CAJ3gD9c-sECEn79zXw4yBnBdOttacoE-6gAyP0oy60nfs_sabQ@mail.gmail.com
If we have, say, an int column that is left-joined to a bigint column
with USING, the merged column is the int column promoted to bigint.
GROUP BY's tests for whether grouping on the merged column allows a
reference to the underlying column, or vice versa, should know about
that relationship --- and they do. But I nearly broke this case with
an ill-advised optimization, so the lack of any test coverage for it
seems like a bad idea.
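The case in question looks roughly like this (hypothetical tables,
with t1.x of type int and t2.x of type bigint):

    SELECT x, t1.x FROM t1 LEFT JOIN t2 USING (x) GROUP BY x;
    -- grouping on the merged column x (t1.x promoted to bigint) must
    -- also license the reference to the underlying column t1.x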
This follows multiple complaints from Peter Geoghegan, Andres Freund
and Alvaro Herrera that this issue ought to be dug into more before
actually happening, if it happens.
Discussion: https://postgr.es/m/20191226144606.GA5659@alvherre.pgsql
In the wake of commit 5b9312378, there's no particular reason
for this restriction (previously, it was problematic because of
the implied rowtype reference). A simple constraint on a whole-row
Var probably isn't that useful, but conceivably somebody would want
to pass one to a function that extracts a partitioning key. Besides
which, we're expending much more code to enforce the restriction than
we save by having it, since the latter quantity is now zero.
So drop the restriction.
Amit Langote
Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
Formerly the rd_partkey and rd_partdesc data structures were always
populated immediately when a relcache entry was built or rebuilt.
This patch changes things so that they are populated only when they
are first requested. (Hence, callers *must* now always use
RelationGetPartitionKey or RelationGetPartitionDesc; just fetching
the pointer directly is no longer acceptable.)
This seems to have some performance benefits, but the main reason to do
it is that it eliminates a recursive-reload failure that occurs if the
partkey or partdesc expressions contain any references to the relation's
rowtype (as discovered by Amit Langote). In retrospect, since loading
these data structures might result in execution of nearly-arbitrary code
via eval_const_expressions, it was a dumb idea to require that to happen
during relcache entry rebuild.
Also, fix things so that old copies of a relcache partition descriptor
will be dropped when the cache entry's refcount goes to zero. In the
previous coding it was possible for such copies to survive for the
lifetime of the session, as I'd complained of in a previous discussion.
(This management technique still isn't perfect, but it's better than
before.) Improve the commentary explaining how that works and why
it's safe to hand out direct pointers to these relcache substructures.
In passing, improve RelationBuildPartitionDesc by using the same
memory-context-parent-swap approach used by RelationBuildPartitionKey,
thereby making it less dependent on strong assumptions about what
partition_bounds_copy does. Avoid doing get_rel_relkind in the
critical section, too.
Patch by Amit Langote and Tom Lane; Robert Haas deserves some credit
for prior work in the area, too. Although this is a pre-existing
problem, no back-patch: the patch seems too invasive to be safe to
back-patch, and the bug it fixes is a corner case that seems
relatively unlikely to cause problems in the field.
Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoY3bRmGB6-DUnoVy5fJoreiBJ43rwMrQRCdPXuKt4Ykaw@mail.gmail.com
The following renaming is done so that source files related to index
access methods are more consistent with table access methods (the
original names used for index AMs were too generic, and could be
confused as including features related to table AMs):
- amapi.h -> indexam.h.
- amapi.c -> indexamapi.c. Here we have an equivalent with
backend/access/table/tableamapi.c.
- amvalidate.c -> indexamvalidate.c.
- amvalidate.h -> indexamvalidate.h.
- genam.c -> indexgenam.c.
- genam.h -> indexgenam.h.
This has been discussed during the development of v12 when table AM was
worked on, but the renaming never happened.
Author: Michael Paquier
Reviewed-by: Fabien Coelho, Julien Rouhaud
Discussion: https://postgr.es/m/20191223053434.GF34339@paquier.xyz
This wasn't checked originally, but it should have been, because
in general pseudo-types can't be stored to and retrieved from disk.
Notably, partition bound values of type "record" would not be
interpretable by another session.
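For example, a partition key expression of type record is now rejected
(hypothetical table; v12 and HEAD give the more specific wording):

    CREATE TABLE p (a int, b int) PARTITION BY RANGE ((ROW(a, b)));
    -- now fails, complaining that the key column has a pseudo-type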
In v12 and HEAD, add another flag to CheckAttributeType's repertoire
so that it can produce a specific error message for this case. That's
infeasible in older branches without an ABI break, so fall back to
a slightly-less-nicely-worded error message in v10 and v11.
Problem noted by Amit Langote, though this patch is not his initial
solution. Back-patch to v10 where partitioning was introduced.
Discussion: https://postgr.es/m/CA+HiwqFUzjfj9HEsJtYWcr1SgQ_=iCAvQ=O2Sx6aQxoDu4OiHw@mail.gmail.com
We probably should have thought of this case when ranges were added,
but we didn't. (It's not the fault of commit eb51af71f, because
ranges didn't exist then.)
It's an old bug, so back-patch to all supported branches.
Discussion: https://postgr.es/m/7782.1577051475@sss.pgh.pa.us
If the first transaction block in these tests were entered exactly
at midnight (California time), they'd report a bogus failure due
to 'now' and 'midnight' having the same values. Commit 8c2ac75c5
had dismissed this as being of negligible probability, but we've
now seen it happen in the buildfarm, so let's prevent it. We can
get pretty much the same test coverage without an it's-not-midnight
assumption by moving the does-'now'-work cases into their own test step.
While here, apply commit 47169c255's s/DELETE/TRUNCATE/ change to
timestamptz as well as timestamp (not sure why that didn't
occur to me at the time; the risk of failure is the same).
Back-patch to all supported branches, since the main point is
to get rid of potential buildfarm failures.
Discussion: https://postgr.es/m/14821.1577031117@sss.pgh.pa.us
If CheckAttributeType() threw an error about the datatype of an
index expression column, it would report an empty column name,
which is pretty unhelpful and certainly not the intended behavior.
I (tgl) evidently broke this in commit cfc5008a5, by not noticing
that the column's attname was used above where I'd placed the
assignment of it.
In HEAD and v12, this is trivially fixable by moving up the
assignment of attname. Before v12 the code is a bit more messy;
to avoid doing substantial refactoring, I took the lazy way out
and just put in two copies of the assignment code.
Report and patch by Amit Langote. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CA+HiwqFA+BGyBFimjiYXXMa2Hc3fcL0+OJOyzUNjhU4NCa_XXw@mail.gmail.com
force_parallel_mode = regress is supposed to force use of a Gather
node without having any impact on EXPLAIN output. But it failed to
accomplish that if both ANALYZE and VERBOSE are given, because that
enables per-worker output data that you wouldn't see if the Gather
hadn't been inserted. Improve the logic so that we suppress the
per-worker data too.
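That is, with a setup along these lines (hypothetical table), the
output is once again comparable to a non-forced run:

    SET force_parallel_mode = regress;
    EXPLAIN (ANALYZE, VERBOSE, COSTS OFF, TIMING OFF)
      SELECT count(*) FROM t;
    -- the per-worker detail lines produced by the injected Gather
    -- node are now suppressed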
This allows putting the new test case added by commit 5935917ce
back into the originally intended form (cf. 776a2c887, 22864f6e0).
We can also get rid of a kluge in subselect.sql, which previously
had to clean up after force_parallel_mode's failure to do what it
said on the tin.
Discussion: https://postgr.es/m/18445.1576177309@sss.pgh.pa.us
It appears that a concurrent autovacuum/autoanalyze run can cause
changes in the plans expected by this test. To prevent that, change
the tables it uses to be temp tables --- there's no need for them
to be permanent, and this should save a few cycles too.
Discussion: https://postgr.es/m/3244.1576160824@sss.pgh.pa.us
The RTE_RESULT simplification logic added by commit 4be058fe9 had a
flaw: it would collapse out an RTE_RESULT that is due to compute a
PlaceHolderVar, and reassign the PHV to the parent join level, even if
another input relation of the join contained a lateral reference to
the PHV. That can't work because the PHV would be computed too late.
In practice it led to failures of internal sanity checks later in
planning (either assertion failures or errors such as "failed to
construct the join relation").
To fix, add code to check for the presence of such PHVs in relevant
portions of the query tree. Notably, this required refactoring
range_table_walker so that a caller could ask to walk individual RTEs
not the whole list. (It might be a good idea to refactor
range_table_mutator in the same way, if only to keep those functions
looking similar; but I didn't do so here as it wasn't necessary for
the bug fix.)
This exercise also taught me that find_dependent_phvs(), as it stood,
could only safely be used on the entire Query, not on subtrees.
Adjust its API to reflect that; which in passing allows it to have
a fast path for the common case of no PHVs anywhere.
Per report from Will Leinweber. Back-patch to v12 where the bug
was introduced.
Discussion: https://postgr.es/m/CALLb-4xJMd4GZt2YCecMC95H-PafuWNKcmps4HLRx2NHNBfB4g@mail.gmail.com
This undoes my hurried commit 776a2c887, restoring the removed test case
in a form that passes with or without force_parallel_mode = regress.
It turns out that force_parallel_mode = regress simply fails to mask
the Worker lines that will be produced by EXPLAIN (ANALYZE, VERBOSE).
I'd say that's a bug in that feature, as its entire alleged reason
for existence is to make the EXPLAIN output the same. It's certainly
not a bug in the plan node pruning logic. Fortunately, this test case
doesn't really need to use ANALYZE, so just drop that.
Discussion: https://postgr.es/m/18891.1576109690@sss.pgh.pa.us
The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and
timestamptz_part() contained calls of timestamp2tm() that were fully
redundant with the ones done just above the switch. This evidently crept
in during commit 258ee1b63, which relocated that code from another place
where the calls were indeed needed. Just delete the redundant calls.
I (tgl) noted that our test coverage of these functions left quite a
bit to be desired, so extend timestamp.sql and timestamptz.sql to
cover all the branches.
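The branches in question are those reached by queries such as
(the timestamptz cases are exercised similarly):

    SELECT extract(dow    FROM timestamp '2020-06-28');  -- 0, a Sunday
    SELECT extract(isodow FROM timestamp '2020-06-28');  -- 7
    SELECT extract(doy    FROM timestamp '2019-12-31');  -- 365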
Back-patch to all supported branches, as the previous commit was.
There's no real issue here other than some wasted cycles in some
not-too-heavily-used code paths, but the test coverage seems valuable.
Report and patch by Li Japin; test case adjustments by me.
Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
The buildfarm says this produces some unexpected output with
force_parallel_mode = regress. There's probably a bug underneath
that, but for the moment just delete the test case to make the
buildfarm green again.
(I now notice that the case had also failed to get updated to follow
commit d52eaa094, which made plan_cache_mode = force_generic_plan
prevail throughout partition_prune.sql; it was thereby managing to
break a later test. When/if we put this back in, *don't* include the
SET and RESET commands.)
Previously, if the startup pruning logic proved that all child nodes
of an Append or MergeAppend could be pruned, we still kept one, just
to keep EXPLAIN from failing. The previous commit removed the
ruleutils.c limitation that required this kluge, so drop it. That
results in less-confusing EXPLAIN output, as per a complaint from
Yuzuko Hosoya.
David Rowley
Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
This patch causes EXPLAIN to always assign a separate table alias to the
parent RTE of an append relation (inheritance set); before, such RTEs
were ignored if not actually scanned by the plan. Since the child RTEs
now always have that same alias to start with (cf. commit 55a1954da),
the net effect is that the parent RTE usually gets the alias used or
implied by the query text, and the children all get that alias with "_N"
appended. (The exception to "usually" is if there are duplicate aliases
in different subtrees of the original query; then some of those original
RTEs will also have "_N" appended.)
This results in more uniform output for partitioned-table plans than
we had before: the partitioned table itself gets the original alias,
and all child tables have aliases with "_N", rather than the previous
behavior where one of the children would get an alias without "_N".
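For instance, with a hypothetical partitioned table:

    CREATE TABLE pt (a int) PARTITION BY LIST (a);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES IN (1);
    CREATE TABLE pt2 PARTITION OF pt FOR VALUES IN (2);
    EXPLAIN (COSTS OFF) SELECT * FROM pt;
    --  Append
    --    ->  Seq Scan on pt1 pt_1
    --    ->  Seq Scan on pt2 pt_2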
The reason for giving the parent RTE an alias, even if it isn't scanned
by the plan, is that we now use the parent's alias to qualify Vars that
refer to an appendrel output column and appear above the Append or
MergeAppend that computes the appendrel. But below the append, Vars
refer to some one of the child relations, and are displayed that way.
This seems clearer than the old behavior where a Var that could carry
values from any child relation was displayed as if it referred to only
one of them.
While at it, change ruleutils.c so that the code paths used by EXPLAIN
deal in Plan trees not PlanState trees. This effectively reverts a
decision made in commit 1cc29fe7c, which seemed like a good idea at
the time to make ruleutils.c consistent with explain.c. However,
it's problematic because we'd really like to allow executor startup
pruning to remove all the children of an append node when possible,
leaving no child PlanState to resolve Vars against. (That's not done
here, but will be in the next patch.) This requires different handling
of subplans and initplans than before, but is otherwise a pretty
straightforward change.
Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
This provides a mechanism to emit literal values in informative
messages, such as query parameters. The new code is more complex than
what it replaces, primarily because it wants to be more efficient.
It also has the (currently unused) additional optional capability of
specifying a maximum size to print.
The new function lives outside common/stringinfo.c so that frontend users
of that file need not pull in unnecessary multibyte-encoding support
code.
Author: Álvaro Herrera and Alexey Bashtanov, after a suggestion from Andres Freund
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20190920203905.xkv5udsd5dxfs6tr@alap3.anarazel.de
isolationtester.c had a hard-wired limit of 3 minutes per test step.
It now emerges that this isn't quite enough for some of the slowest
buildfarm animals. This isn't the first time we've had to raise
this limit (cf. 1db439ad4), so let's make it configurable. This
patch raises the default to 5 minutes, and introduces an environment
variable PGISOLATIONTIMEOUT that can be set if more time is needed,
following the precedent of PGCTLTIMEOUT.
Also, modify isolationtester so that when the timeout is hit,
it explicitly reports having sent a cancel. This makes the regression
failure log considerably more intelligible. (In the worst case, a
timed-out test might actually be reported as "passing" without this
extra output, so arguably this is a bug fix in itself.)
In passing, update the README file, which had apparently not gotten
touched when we added "make check" support here.
Back-patch to 9.6; older versions don't have comparable timeout logic.
Discussion: https://postgr.es/m/22964.1575842935@sss.pgh.pa.us
While fooling around with the EXPLAIN improvements I've been working
on, I noticed that there were some large gaps in our test coverage
of ruleutils.c, according to the code coverage report. This commit
just adds a few test cases to improve coverage of:
get_name_for_var_field()
get_update_query_targetlist_def()
isSimpleNode()
get_sublink_expr()
When creating a uniqueness constraint using a pre-existing index,
we have always required that the index have the same properties you'd
get if you just let a new index get built. However, when collations
were added, we forgot to add the index's collation to that check.
It's hard to trip over this without intentionally trying to break it:
you'd have to explicitly specify a different collation in CREATE
INDEX, then convert it to a pkey or unique constraint. Still, if you
did that, pg_dump would emit a script that fails to reproduce the
index's collation. The main practical problem is that after a
pg_upgrade the index would be corrupt, because its actual physical
order wouldn't match what pg_index says. A more theoretical issue,
which is new as of v12, is that if you create the index with a
nondeterministic collation then it wouldn't be enforcing the normal
notion of uniqueness, causing the constraint to mean something
different from a normally-created constraint.
To fix, just add collation to the conditions checked for index
acceptability in ADD PRIMARY KEY/UNIQUE USING INDEX. We won't try
to clean up after anybody who's already created such a situation;
it seems improbable enough to not be worth the effort involved.
(If you do get into trouble, a REINDEX should be enough to fix it.)
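For instance (hypothetical table, and assuming the column's collation
is not already "POSIX"):

    CREATE TABLE t (s text);
    CREATE UNIQUE INDEX t_s_idx ON t (s COLLATE "POSIX");
    ALTER TABLE t ADD UNIQUE USING INDEX t_s_idx;  -- now rejected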
In principle this is a long-standing bug, but I chose not to
back-patch --- the odds of causing trouble seem about as great
as the odds of preventing it, and both risks are very low anyway.
Per report from Alexey Bashtanov, though this is not his preferred
fix.
Discussion: https://postgr.es/m/b05ce36a-cefb-ca5e-b386-a400535b1c0b@imap.cc