This issue is environment-sensitive, where the SSL tests could fail in
various ways by picking up defaults provided by sslcert, sslkey,
sslrootcert, sslcrl and sslcrldir coming from a local setup, as
~/.postgresql/ by default. Horiguchi-san has reported two failures,
but more advanced testing from me (i.e. inclusion of garbage SSL
configuration in ~/.postgresql/ for all those parameters) has shown
dozens of failures that can be triggered across the whole test suite.
History has shown that we are not good at addressing such issues,
fixing them locally like in dd87799, and such problems keep appearing.
This commit strengthens the entire test suite to put an end to this
class of problems by embedding invalid default values in all the
connection strings used in the tests. The invalid values are prefixed
in each connection string, relying on the values that follow in the
connection string to override any invalid value previously set. Note
that two tests related to CRLs are expected to fail with certain
pre-set configurations; for these we rely on enforcing an empty value,
rather than an invalid one, after the invalid set of values.
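For illustration, each connection string now takes roughly this shape
(a sketch; values hypothetical):
    sslkey=invalid sslcert=invalid sslrootcert=invalid sslcrl=invalid
    sslcrldir=invalid user=ssltestuser sslcert=ssl/client.crt
Parameters repeated later in the string override the invalid prefix,
while anything a test does not set stays pinned to "invalid" instead
of silently picking up a value from ~/.postgresql/.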
Reported-by: Kyotaro Horiguchi
Reviewed-by: Andrew Dunstan, Daniel Gustafsson, Kyotaro Horiguchi
Discussion: https://postgr.es/m/20220316.163658.1122740600489097632.horikyota.ntt@gmail.com
backpatch-through: 10
The planner needs to treat GroupingFunc like Aggref for many purposes,
in particular with respect to processing of the argument expressions,
which are not to be evaluated at runtime. A few places hadn't gotten
that memo, notably including subselect.c's processing of outer-level
aggregates. This resulted in assertion failures or wrong plans for
cases in which a GROUPING() construct references an outer aggregation
level.
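For example, a query of roughly this shape (hypothetical reproducer)
hit the assertion, because the GROUPING() inside the sub-select
references the outer query's aggregation level:
    SELECT a, sum(b), (SELECT GROUPING(a)) FROM tab GROUP BY a;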
Also fix missing special cases for GroupingFunc in cost_qual_eval
(resulting in wrong cost estimates for GROUPING(), although it's
not clear that that would affect plan shapes in practice) and in
ruleutils.c (resulting in excess parentheses in pretty-print mode).
Per bug #17088 from Yaoguang Chen. Back-patch to all supported
branches.
Richard Guo, Tom Lane
Discussion: https://postgr.es/m/17088-e33882b387de7f5c@postgresql.org
DROP INDEX needs to lock the index's table before the index itself,
else it will deadlock against ordinary queries that acquire the
relation locks in that order. This is correctly mechanized for
plain indexes by RangeVarCallbackForDropRelation; but in the case of
a partitioned index, we neglected to lock the child tables in advance
of locking the child indexes. We can fix that by traversing the
inheritance tree and acquiring the needed locks in RemoveRelations,
after we have acquired our locks on the parent partitioned table and
index.
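In SQL terms, the deadlock pairing looks like this (names
hypothetical):
    -- session 1: previously locked the partitioned index and its
    -- children before the child tables
    DROP INDEX parent_idx;
    -- session 2: an ordinary query locks a child table first, then
    -- its indexes
    SELECT count(*) FROM child_part;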
While at it, do some refactoring to eliminate confusion between
the actual and expected relkind in RangeVarCallbackForDropRelation.
We can save a couple of syscache lookups too, by having that function
pass back info that RemoveRelations will need.
Back-patch to v11 where partitioned indexes were added.
Jimmy Yih, Gaurab Dey, Tom Lane
Discussion: https://postgr.es/m/BYAPR05MB645402330042E17D91A70C12BD5F9@BYAPR05MB6454.namprd05.prod.outlook.com
The output of table_to_xmlschema() and allied functions includes
a regex describing valid values for these types ... but the regex
was itself invalid, as it failed to escape a literal "+" sign.
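For reference, these functions are invoked like this (usage sketch):
    SELECT table_to_xmlschema('sometab'::regclass, true, false, '');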
Report and fix by Renan Soares Lopes. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/7f6fabaa-3f8f-49ab-89ca-59fbfe633105@me.com
In commit bf7ca1587, I had the bright idea that we could make the
result of a whole-row Var (that is, foo.*) track any column aliases
that had been applied to the FROM entry the Var refers to. However,
that's not terribly logically consistent, because now the output of
the Var is no longer of the named composite type that the Var claims
to emit. bf7ca1587 tried to handle that by changing the output
tuple values to be labeled with a blessed RECORD type, but that's
really pretty disastrous: we can wind up storing such tuples onto
disk, whereupon they're not readable by other sessions.
The only practical fix I can see is to give up on what bf7ca1587
tried to do, and say that the column names of tuples produced by
a whole-row Var are always those of the underlying named composite
type, query aliases or no. While this introduces some inconsistencies,
it removes others, so it's not that awful in the abstract. What *is*
kind of awful is to make such a behavioral change in a back-patched
bug fix. But corrupt data is worse, so back-patched it will be.
(A workaround available to anyone who's unhappy about this is to
introduce an extra level of sub-SELECT, so that the whole-row Var is
referring to the sub-SELECT's output and not to a named table type.
Then the Var is of type RECORD to begin with and there's no issue.)
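For instance (a sketch; names hypothetical):
    -- the whole-row Var keeps the composite type's column names,
    -- ignoring the aliases x and y
    SELECT t FROM mytable AS t(x, y);
    -- workaround: interpose a sub-SELECT, making the Var type RECORD
    SELECT ss FROM (SELECT * FROM mytable AS t(x, y)) ss;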
Per report from Miles Delahunty. The faulty commit dates to 9.5,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/2950001.1638729947@sss.pgh.pa.us
Slow hosts may avoid load-induced, spurious failures by setting
environment variable PG_TEST_TIMEOUT_DEFAULT to some number of seconds
greater than 180. Developers may see faster failures by setting that
environment variable to some lesser number of seconds. In tests, write
$PostgreSQL::Test::Utils::timeout_default wherever the convention has
been to write 180. This change raises the default for some briefer
timeouts. Back-patch to v10 (all supported versions).
Discussion: https://postgr.es/m/20220218052842.GA3627003@rfd.leadboat.com
When cleaning up temporary objects during process exit, the cleanup
could fail with:
FATAL: cannot fetch toast data without an active snapshot
The bug is caused by RemoveTempRelationsCallback() not setting up a
snapshot. If an object with toasted catalog data needs to be cleaned up,
init_toast_snapshot() could fail with the above error.
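One hypothetical way to end up with toasted catalog data for a
temporary object is a column default large enough that its pg_attrdef
row gets toasted:
    -- build a several-kB, poorly compressible literal into the default
    DO $$
    BEGIN
      EXECUTE format('CREATE TEMP TABLE tt (a text DEFAULT %L)',
        (SELECT string_agg(md5(i::text), '')
         FROM generate_series(1, 200) i));
    END $$;
    -- disconnecting now makes RemoveTempRelationsCallback() drop it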
Most of the time, however, the problem is masked due to cached catalog
snapshots being returned by GetOldestSnapshot(). But dropping an object can
cause catalog invalidations to be emitted. If no further catalog accesses are
necessary between the invalidation processing and the next toast datum
deletion, the bug becomes visible.
It's easy to miss this bug because it typically happens after clients
disconnect and the FATAL error just ends up in the log.
Luckily temporary table cleanup at the next use of the same temporary schema
or during DISCARD ALL does not have the same problem.
Fix the bug by pushing a snapshot in RemoveTempRelationsCallback(). Also add
isolation tests for temporary object cleanup, including objects with toasted
catalog data.
A future HEAD-only commit will add more assertions.
Reported-By: Miles Delahunty
Author: Andres Freund
Discussion: https://postgr.es/m/CAOFAq3BU5Mf2TTvu8D9n_ZOoFAeQswuzk7yziAb7xuw_qyw5gw@mail.gmail.com
Backpatch: 10-
Following migration of Windows buildfarm members running TAP tests to
use of ucrt64 perl for those tests, special processing for msys perl is
no longer necessary and so is removed.
Backpatch to release 10
Discussion: https://postgr.es/m/c65a8781-77ac-ea95-d185-6db291e1baeb@dunslane.net
createplan.c tries to save a runtime projection step by specifying
a scan plan node's output as being exactly the table's columns, or
index's columns in the case of an index-only scan, if there is not a
reason to do otherwise. This logic did not previously pay attention
to whether an index's columns are returnable. That worked, sort of
accidentally, until commit 9a3ddeb51 taught setrefs.c to reject plans
that try to read a non-returnable column. I have no desire to loosen
setrefs.c's new check, so instead adjust use_physical_tlist() to not
try to optimize this way when there are non-returnable column(s).
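A sketch of the situation (polygon's GiST opclass stores only a
bounding box, so the key column is not returnable, while the INCLUDE'd
column is):
    CREATE TABLE t (p polygon, x int);
    CREATE INDEX t_gist ON t USING gist (p) INCLUDE (x);
    -- an index-only scan here must not use a physical tlist that
    -- claims to emit p
    SELECT x FROM t WHERE p && polygon '((0,0),(0,1),(1,1))';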
Per report from Ryan Kelly. Like the previous patch, back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAHUie24ddN+pDNw7fkhNrjrwAX=fXXfGZZEHhRuofV_N_ftaSg@mail.gmail.com
Buildfarm members kittiwake, tadarida and snapper began to fail
frequently when commits 3cd9c3b921977272e6650a5efbeade4203c4bca2 and
f47ed79cc8a0cfa154dc7f01faaf59822552363f added tests of concurrency, but
the problem was reachable before those commits. Back-patch to v10 (all
supported versions).
Discussion: https://postgr.es/m/20220116210241.GC756210@rfd.leadboat.com
Previously, unless we had to add a NOT NULL constraint to the column,
this command resulted in updating only the index's relcache entry.
That's problematic when replication behavior is being driven off the
existence of a primary key: other sessions (and ours too for that
matter) failed to recalculate their opinion of whether the table can
be replicated. Add a relcache invalidation to fix it.
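The command in question is presumably of this shape (sketch):
    ALTER TABLE tab ADD PRIMARY KEY USING INDEX tab_key_uidx;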
This has been broken since pg_class.relhaspkey was removed in v11.
Before that, updating the table's relhaspkey value sufficed to cause
a cache flush. Hence, backpatch to v11.
Report and patch by Hou Zhijie
Discussion: https://postgr.es/m/OS0PR01MB5716EBE01F112C62F8F9B786947B9@OS0PR01MB5716.jpnprd01.prod.outlook.com
Commits 6c4a8903b et al. had a couple of deficiencies:
* The logic I added to Cluster::start to see if a PID file is present
could be fooled by a stale PID file left over from a previous
postmaster. To fix, if we're not sure whether we expect to find a
running postmaster or not, validate the PID using "kill 0".
* 017_shm.pl has a loop in which it just issues repeated Cluster::start
calls; this will fail if some invocation fails but leaves self->_pid
set. Per buildfarm results, the above fix is not enough to make this
safe: we might have "validated" a PID for a postmaster that exits
immediately after we look. Hence, match each failed start call with
a stop call that will get us back to the self->_pid == undef state.
Add a fail_ok option to Cluster::stop to make this work.
Discussion: https://postgr.es/m/CA+hUKGKV6fOHvfiPt8=dOKzvswjAyLoFoJF1iQXMNpi7+hD1JQ@mail.gmail.com
Since the test requires reproducible behavior from VACUUM, and since
DISABLE_PAGE_SKIPPING doesn't actually disable all forms of page
skipping, let's use a temporary table to avoid contention.
Back-patch to 12, like commit 3414099c.
Discussion: https://postgr.es/m/20220120052404.sonrhq3f3qgplpzj%40alap3.anarazel.de
"pg_ctl start" might start a new postmaster and then return failure
anyway, for example if PGCTLTIMEOUT is exceeded. If there is a
postmaster there, it's still incumbent on us to shut it down at
script end, so check for the PID file even though we are about
to fail.
This has been broken all along, so back-patch to all supported branches.
Discussion: https://postgr.es/m/647439.1642622744@sss.pgh.pa.us
Where we test vacuum_truncate's effects, truncation sometimes fails to
happen as expected on the buildfarm. That could be explained by page
skipping, so disable it explicitly, on the theory that commit fe246d1c
didn't go far enough.
Back-patch to 12, where the vacuum_truncate tests were added.
Discussion: https://postgr.es/m/CA%2BhUKGLT2UL5_JhmBzUgkdyKfc%3D5J-gJSQJLysMs4rqLUKLAzw%40mail.gmail.com
Commit 859b3003de disabled building of extended stats for inheritance
trees, to prevent updating the same catalog row twice. While that
resolved the issue, it also means there are no extended stats for
declaratively partitioned tables, because there are no data in the
non-leaf relations.
That also means declaratively partitioned tables were not affected by
the issue 859b3003de addressed, which means this is a regression
affecting queries that calculate estimates for the inheritance tree as
a whole (which includes e.g. GROUP BY queries).
But because partitioned tables are empty, we can invert the condition
and build statistics only for the case with inheritance, without losing
anything. And we can consider them when calculating estimates.
It may be necessary to run ANALYZE on partitioned tables to collect
proper statistics. For declarative partitioning there should be no
prior statistics, and it might take time before autoanalyze is
triggered. For tables partitioned by inheritance the statistics may
include data from child relations (if built before 859b3003de),
contradicting the current code.
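A sketch of what that looks like (names hypothetical):
    CREATE STATISTICS parent_stats (dependencies)
      ON a, b FROM parent_tbl;
    ANALYZE parent_tbl;  -- builds the extended stats for the tree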
Report and patch by Justin Pryzby, minor fixes and cleanup by me.
Backpatch all the way back to PostgreSQL 10, where extended statistics
were introduced (same as 859b3003de).
Author: Justin Pryzby
Reported-by: Justin Pryzby
Backpatch-through: 10
Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
Since commit 859b3003de we only build extended statistics for individual
relations, ignoring the child relations. This resolved the issue with
updating the catalog tuple twice, but we still tried to use the statistics
when calculating estimates for the whole inheritance tree. When the
relations contain very distinct data, it may produce bogus estimates.
This is roughly the same issue 427c6b5b9 addressed ~15 years ago, and we
fix it the same way - by ignoring extended statistics when calculating
estimates for the inheritance tree as a whole. We still consider
extended statistics when calculating estimates for individual child
relations, of course.
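For instance (a sketch):
    -- estimates for the whole inheritance tree: extended stats ignored
    EXPLAIN SELECT * FROM parent WHERE a = 1 AND b = 1;
    -- estimates for a single relation: extended stats still considered
    EXPLAIN SELECT * FROM ONLY parent WHERE a = 1 AND b = 1;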
This may result in plan changes due to different estimates, but if the
old statistics were not describing the inheritance tree particularly
well it's quite likely the new plans are actually better.
Report and patch by Justin Pryzby, minor fixes and cleanup by me.
Backpatch all the way back to PostgreSQL 10, where extended statistics
were introduced (same as 859b3003de).
Author: Justin Pryzby
Reported-by: Justin Pryzby
Backpatch-through: 10
Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
Commit 7745bc352 intended to ensure that whole-row Vars would be
printed with "::type" decoration in all contexts where plain
"var.*" notation would result in star-expansion, notably in
ROW() and VALUES() constructs. However, it missed the case of
INSERT with a single-row VALUES, as reported by Timur Khanjanov.
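For instance (assuming a table t whose rowtype is also named t):
    SELECT ROW(t.*) FROM t;     -- t.* star-expands into columns
    SELECT ROW(t.*::t) FROM t;  -- decorated: one composite datum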
Nosing around ruleutils.c, I found a second oversight: the
code for RowCompareExpr generates ROW() notation without benefit
of an actual RowExpr, and naturally it wasn't in sync :-(.
(The code for FieldStore also does this, but we don't expect that
to generate strictly parsable SQL anyway, so I left it alone.)
Back-patch to all supported branches.
Discussion: https://postgr.es/m/efaba6f9-4190-56be-8ff2-7a1674f9194f@intrans.baku.az
We disallow altering a column datatype within a regular table,
if the table's rowtype is used as a column type elsewhere,
because we lack code to go around and rewrite the other tables.
This restriction should apply to partitioned tables as well, but it
was not checked because ATRewriteTables and ATPrepAlterColumnType
were not on the same page about who should do it for which relkinds.
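A sketch of the now-rejected case:
    CREATE TABLE p (a int) PARTITION BY RANGE (a);
    CREATE TABLE uses_p (c p);  -- p's rowtype used as a column type
    ALTER TABLE p ALTER COLUMN a TYPE bigint;  -- must fail, as it
                                               -- does for plain tables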
Per bug #17351 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17351-6db1870f3f4f612a@postgresql.org
Commit 4ace45677 failed to fix the problem fully, because the
same issue of attempting to fetch a non-returnable index column
can occur when rechecking the indexqual after using a lossy index
operator. Moreover, it broke EXPLAIN for such indexquals (which
indicates a gap in our test cases :-().
Revert the code changes of 4ace45677 in favor of adding a new field
to struct IndexOnlyScan, containing a version of the indexqual that
can be executed against the index-returned tuple without using any
non-returnable columns. (The restrictions imposed by check_index_only
guarantee this is possible, although we may have to recompute indexed
expressions.) Support construction of that during setrefs.c
processing by marking IndexOnlyScan.indextlist entries as resjunk
if they can't be returned, rather than removing them entirely.
(We could alternatively require setrefs.c to look up the IndexOptInfo
again, but abusing resjunk this way seems like a reasonably safe way
to avoid needing to do that.)
This solution isn't great from an API-stability standpoint: if there
are any extensions out there that build IndexOnlyScan structs directly,
they'll be broken in the next minor releases. However, only a very
invasive extension would be likely to do such a thing. There's no
change in the Path representation, so typical planner extensions
shouldn't have a problem.
As before, back-patch to all supported branches.
Discussion: https://postgr.es/m/3179992.1641150853@sss.pgh.pa.us
Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
If an index has both returnable and non-returnable columns, and one of
the non-returnable columns is an expression using a Var that is in a
returnable column, then a query returning that expression could result
in an index-only scan plan that attempts to read the non-returnable
column, instead of recomputing the expression from the returnable
column as intended.
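A sketch of the failing shape (schema hypothetical; circle's GiST
storage is lossy and not returnable, while the INCLUDE'd point is):
    CREATE TABLE t (p point);
    CREATE INDEX t_gist ON t USING gist (circle(p, 1)) INCLUDE (p);
    -- must recompute circle(p, 1) from p, not read it from the index
    SELECT circle(p, 1) FROM t
      WHERE circle(p, 1) && circle '<(0,0),1>';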
To fix, redefine the "indextlist" list of an IndexOnlyScan plan node
as containing null Consts in place of any non-returnable columns.
This solves the problem by preventing setrefs.c from falsely matching
to such entries. The executor is happy since it only cares about the
exposed types of the entries, and ruleutils.c doesn't care because a
correct plan won't reference those entries. I considered some other
ways to prevent setrefs.c from doing the wrong thing, but this way
seems good since (a) it allows a very localized fix, (b) it makes
the indextlist structure more compact in many cases, and (c) the
indextlist is now a more faithful representation of what the index AM
will actually produce, viz. nulls for any non-returnable columns.
This is easier to hit since we introduced included columns, but it's
possible to construct failing examples without that, as per the
added regression test. Hence, back-patch to all supported branches.
Per bug #17350 from Louis Jachiet.
Discussion: https://postgr.es/m/17350-b5bdcf476e5badbb@postgresql.org
Fix the code changed by commit 5c056b0c2 so that we always generate
RelabelType, not something else, for a cast to unspecified typmod.
Otherwise planner optimizations might not happen.
It appears we missed this point because the previous experiments were
done on type numeric: the parser undesirably generates a call on the
numeric() length-coercion function, but then numeric_support()
optimizes that down to a RelabelType, so that everything seems fine.
It misbehaves for types that have a non-optimized length coercion
function, such as bpchar.
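For example (a sketch):
    CREATE TABLE t (c char(10));
    -- cast to unspecified typmod: should produce a plain RelabelType,
    -- not a call of the bpchar() length-coercion function
    EXPLAIN VERBOSE SELECT c::bpchar FROM t;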
Per report from John Naylor. Back-patch to all supported branches,
as the previous patch eventually was. Unfortunately, that no longer
includes 9.6 ... we really shouldn't put this type of change into a
nearly-EOL branch.
Discussion: https://postgr.es/m/CAFBsxsEfbFHEkouc+FSj+3K1sHipLPbEC67L0SAe-9-da8QtYg@mail.gmail.com
When writing or debugging an isolation test, it is sometimes useful to
see which session holds what lock, etc. To make that easier, both as
part of spec files and interactively, append the session name to
application_name. Since b1907d688 application_name already contains the
test name; this appends the session's name to that.
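E.g. (the exact name format shown is illustrative):
    SELECT application_name, query FROM pg_stat_activity;
    -- application_name now looks something like "isolation/my-spec/s1"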
insert-conflict-specconflict did something like this manually, which can now
be removed.
As we have done lately with other test infrastructure improvements, backpatch
this change, to make it easier to backpatch tests.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Michael Paquier <michael@paquier.xyz>
Reviewed-By: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/20211211012052.2blmzcmxnxqawd2z@alap3.anarazel.de
Backpatch: 10-, to make backpatching of tests easier.
We started to backpatch test infrastructure improvements more
aggressively to make it easier to backpatch tests. A proposed
isolationtester improvement has a dependency on b1907d688, so backpatch
b1907d688 to make it easier to subsequently backpatch that change.
Discussion: https://postgr.es/m/861977.1639421872@sss.pgh.pa.us
Backpatch: 10-12, the commit already is in 13-HEAD
Replica identities that depend directly on an index rely on a set of
properties, one of them being that all the columns defined in this index
have to be marked as NOT NULL. There was a hole in the logic with ALTER
TABLE DROP NOT NULL, where it was possible to remove the NOT NULL
property of a column that is part of an index used as replica identity,
so block it to avoid problems with logical decoding down the road.
The same check was already done for columns that are part of a primary
key, so the fix is straightforward.
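A sketch of the now-blocked sequence:
    CREATE TABLE t (a int NOT NULL, b int);
    CREATE UNIQUE INDEX t_a_idx ON t (a);
    ALTER TABLE t REPLICA IDENTITY USING INDEX t_a_idx;
    ALTER TABLE t ALTER COLUMN a DROP NOT NULL;  -- now rejected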
Author: Haiying Tang, Hou Zhijie
Reviewed-by: Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/OS0PR01MB6113338C102BEE8B2FFC5BD9FB619@OS0PR01MB6113.jpnprd01.prod.outlook.com
Backpatch-through: 10
Per buildfarm member prion, which runs the regression tests under a
role name that uses a hyphen. Issue introduced by 835bcba.
Discussion: https://postgr.es/m/YZW4MvzCZ+hQ34vw@paquier.xyz
Backpatch-through: 12
This commit adds to the main regression test suite a table with all
the in-core data types (some exceptions apply). This table is not
dropped, so that pg_upgrade is able to check the binary compatibility
of the types tracked in the table. If a new type is added in core,
this part of the tests will need a refresh, and the tests are designed
to fail if that were to happen.
As this is useful for upgrades, and as these rely on the objects
created in the regression test suite of the old version upgraded from,
a backpatch down to 12 is done, which is the last point where a binary
incompatible change has been done (7c15cef). This will hopefully be
enough to find out if something gets broken during the development of a
new version of Postgres, so that it is possible to take action in
pg_upgrade itself in this case (like 0ccfc28 for sql_identifier).
An area that is not covered yet relates to external modules, which may
create their own types. The testing infrastructure of pg_upgrade is
not integrated yet with the external modules stored in core
(src/test/modules/ or contrib/; all use the same database name for
their tests, so there would be an overlap). This could be improved in
the future.
Author: Justin Pryzby
Reviewed-by: Jacob Champion, Peter Eisentraut, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 12
When changing REPLICA IDENTITY INDEX to another one, the target table's
relcache was not being invalidated. This led to update/delete
operations being skipped during apply on the subscriber side, as the
columns required to search for the corresponding rows would not get
logged.
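The affected command (sketch; index name hypothetical):
    ALTER TABLE t REPLICA IDENTITY USING INDEX t_new_idx;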
Author: Tang Haiying, Hou Zhijie
Reviewed-by: Euler Taveira, Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB61133CA11630DAE45BC6AD95FB939@OS0PR01MB6113.jpnprd01.prod.outlook.com
We've seen intermittent failures in this test on slower buildfarm
machines, which I think can be explained by assuming that autovacuum
emitted some additional WAL. Disable autovacuum to stabilize it.
In passing, use stringwise not numeric comparison to compare
WAL file names. Doesn't matter at present, but they are
hex strings not decimal ...
Discussion: https://postgr.es/m/1372189.1636499287@sss.pgh.pa.us
If lo_export() fails to open the target file or to write to it, it leaks
the created LargeObjectDesc and its snapshot in the top-transaction
context and resource owner. That's pretty harmless, as it's a small
leak after all, but it gives the user a "Snapshot reference leak"
warning.
Fix by using a short-lived memory context and no resource owner for
transient LargeObjectDescs that are opened and closed within one function
call. The leak is easiest to reproduce with lo_export() on a directory
that doesn't exist, but in principle the other lo_* functions could also
fail.
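Easiest reproducer (a sketch):
    -- create a large object, then fail to open the target directory
    SELECT lo_export(lo_create(0), '/nonexistent-dir/file');
    -- fails, and previously left a "Snapshot reference leak" warning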
Backpatch to all supported versions.
Reported-by: Andrew B
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/32bf767a-2d65-71c4-f170-122f416bab7e@iki.fi
Commits fdd965d07 and 3cd9c3b92 tested CREATE INDEX CONCURRENTLY by
launching two separate pgbench runs concurrently. This was needed so
that only a single client thread would run CREATE INDEX CONCURRENTLY,
avoiding deadlock between two CICs. However, there's a better way,
which is to use an advisory lock to prevent concurrent CICs. That's
better in part because the test code is shorter and more readable, but
mostly because it automatically scales things to launch an appropriate
number of CICs relative to the number of INSERT transactions.
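The trick, in its simplest form (a sketch; the lock key is arbitrary):
    SELECT pg_advisory_lock(42);   -- session-level: one CIC at a time
    CREATE INDEX CONCURRENTLY idx ON t (x);
    SELECT pg_advisory_unlock(42);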
As committed, typically half to three-quarters of the CIC transactions
were pointless because the INSERT transactions had already stopped.
In passing, remove background_pgbench, which was added to support
these tests and isn't needed anymore. We can always put it back
if we find a use for it later.
Back-patch to v12; older pgbench versions lack the
conditional-execution features needed for this method.
Tom Lane and Andrey Borodin
Discussion: https://postgr.es/m/139687.1635277318@sss.pgh.pa.us
The purpose of commit 8a54e12a38d1545d249f1402f66c8cde2837d97c was to
fix this, and it sufficed when the PREPARE TRANSACTION completed before
the CIC looked for lock conflicts. Otherwise, things still broke. As
before, in a cluster having used CIC while having enabled prepared
transactions, queries that use the resulting index can silently fail to
find rows. It may be necessary to reindex to recover from past
occurrences; REINDEX CONCURRENTLY suffices. Fix this for future index
builds by making CIC wait for arbitrarily-recent prepared transactions
and for ordinary transactions that may yet PREPARE TRANSACTION. As part
of that, have PREPARE TRANSACTION transfer locks to its dummy PGPROC
before it calls ProcArrayClearTransaction(). Back-patch to 9.6 (all
supported versions).
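Recovery from past occurrences (sketch; index name hypothetical):
    REINDEX INDEX CONCURRENTLY possibly_affected_idx;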
Andrey Borodin, reviewed (in earlier versions) by Andres Freund.
Discussion: https://postgr.es/m/01824242-AA92-4FE9-9BA7-AEBAFFEA3D0C@yandex-team.ru
CIC and REINDEX CONCURRENTLY assume backends see their catalog changes
no later than each backend's next transaction start. That failed to
hold when a backend absorbed a relevant invalidation in the middle of
running RelationBuildDesc() on the CIC index. Queries that use the
resulting index can silently fail to find rows. Fix this for future
index builds by making RelationBuildDesc() loop until it finishes
without accepting a relevant invalidation. It may be necessary to
reindex to recover from past occurrences; REINDEX CONCURRENTLY suffices.
Back-patch to 9.6 (all supported versions).
Noah Misch and Andrey Borodin, reviewed (in earlier versions) by Andres
Freund.
Discussion: https://postgr.es/m/20210730022548.GA1940096@gust.leadboat.com
Commit 1b5d797cd4f7 intended to relax the lock level used to rename
indexes, but inadvertently allowed *any* relation to be renamed with a
lowered lock level, as long as the command is spelled ALTER INDEX.
That's undesirable for other relation types, so retry the operation with
the higher lock if the relation turns out not to be an index.
After this fix, ALTER INDEX <sometable> RENAME will require access
exclusive lock, which it didn't before.
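Illustration (names hypothetical):
    -- spelled ALTER INDEX but targeting a table: the operation is now
    -- retried with ACCESS EXCLUSIVE instead of the reduced lock
    ALTER INDEX some_table RENAME TO some_table_new;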
Author: Nathan Bossart <bossartn@amazon.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Onder Kalaci <onderk@microsoft.com>
Discussion: https://postgr.es/m/PH0PR21MB1328189E2821CDEC646F8178D8AE9@PH0PR21MB1328.namprd21.prod.outlook.com
ldapsearch's deprecated -h/-p arguments were removed; we need to use -H
now, which has been around for over 20 years.
As perltidy insists on reflowing the parameters anyway, change their
order and "phrasing" to yield a less confusing layout (per suggestion
from Tom Lane).
Discussion: https://postgr.es/m/20211009233850.wvr6apcrw2ai6cnj@alap3.anarazel.de
Backpatch: 11-, where the tests were added.
An update such as "UPDATE ... SET fld[n].subfld = whatever"
failed if the array elements were domains rather than plain
composites. That's because isAssignmentIndirectionExpr()
failed to cope with the CoerceToDomain node that would appear
in the expression tree in this case. The result would typically
be a crash, and even if we accidentally didn't crash, we'd not
correctly preserve other fields of the same array element.
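A sketch of the failing case:
    CREATE TYPE comp AS (subfld int);
    CREATE DOMAIN dcomp AS comp;
    CREATE TABLE t (fld dcomp[]);
    UPDATE t SET fld[1].subfld = 42;  -- crashed before the fix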
Per report from Onder Kalaci. Back-patch to v11 where arrays of
domains came in.
Discussion: https://postgr.es/m/PH0PR21MB132823A46AA36F0685B7A29AD8BD9@PH0PR21MB1328.namprd21.prod.outlook.com
The test code added with ff9f111bce24 fails under valgrind, and
probably in other slow cases too, because if (say) autovacuum runs in
between and produces WAL of its own, the large INSERT fails to account
for that in the LSN calculations. Rewrite to use a DO loop.
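The shape of the rewrite (a sketch; table and target LSN hypothetical):
    DO $$
    BEGIN
      WHILE pg_current_wal_insert_lsn() < '0/2000000'::pg_lsn LOOP
        INSERT INTO filler SELECT repeat('x', 1000);
      END LOOP;
    END $$;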
Per complaint from Andres Freund
Backpatch to all branches.
Discussion: https://postgr.es/m/20211013180338.5guyqzpkcisqugrl@alap3.anarazel.de
Incrementing the level of the call stack reported is useful for
debugging purposes, as it makes it possible to know which part of a
test is exactly failing, especially when a test is structured with
subroutines that call routines from Test::More.
This adds more incrementations of $Test::Builder::Level where that
improves debugging (for example, it does not make sense for some paths
like pg_rewind, where long subroutines are used).
A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and a wording coming from both of us.
Usage of Test::Builder::Level has spread in 12, so a backpatch down to
this version is done.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
This fixes a loss of precision that occurs when the first input is
very close to 1, so that its logarithm is very small.
Formerly, during the initial low-precision calculation to estimate the
result weight, the logarithm was computed to a local rscale that was
capped to NUMERIC_MAX_DISPLAY_SCALE (1000). However, the base may be
as close as 1e-16383 to 1, hence its logarithm may be as small as
1e-16383, and so the local rscale needs to be allowed to exceed 16383,
otherwise all precision is lost, leading to a poor choice of rscale
for the full-precision calculation.
Fix this by removing the cap on the local rscale during the initial
low-precision calculation, as we already do in the full-precision
calculation. This doesn't change the fact that the initial calculation
is a low-precision approximation, computing the logarithm to around 8
significant digits, which is very fast, especially when the base is
very close to 1.
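A hypothetical illustration, with the base within 1e-2000 of 1 (the
result should come out close to e = 2.71828...):
    SELECT power(1 + 1e-2000, 1e2000);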
Patch by me, reviewed by Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCV-Ceu%2BHpRMf416yUe4KKFv%3DtdgXQAe5-7S9tD%3D5E-T1g%40mail.gmail.com
3c5b0685b921 used setFilePointer() to set the position of the
filehandle, but passed the wrong filehandle, always leaving the
position at 0. Instead of just fixing that, remove the use of
setFilePointer(); we have a perl fd at this point, so we can just use
perl's seek().
Additionally, the perl filehandle wasn't being closed, just the windows
filehandle.
Reviewed-By: Andrew Dunstan <andrew@dunslane.net>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20211003173038.64mmhgxctfqn7wl6@alap3.anarazel.de
Backpatch: 9.6-, like 3c5b0685b921
Computing related timestamps by subtracting "N days" is sensitive
to the prevailing timezone, since we interpret that as "same local
time on the N'th prior day". Even though the intervals in question
are only two to four days, through remarkable bad luck they managed
to cross the end of Ramadan in 2014, causing the test's output to
change if timezone is set to Africa/Casablanca. (Maybe in other
Muslim areas as well; I didn't check.) There's absolutely no reason
for this test to exercise interval subtraction, so just get rid of
that and use plain timestamptz constants representing the intended
values.
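E.g. (a sketch; whether a given window spans a clock change depends on
the zone's rules for 2014):
    SET timezone = 'Africa/Casablanca';
    -- "same local time, three days earlier": if the window spans one
    -- of the 2014 Ramadan clock changes, elapsed time is not 72 hours
    SELECT timestamptz '2014-08-03 12:00' - interval '3 days';
    -- a plain constant pins the intended instant instead:
    SELECT timestamptz '2014-07-31 12:00';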
Per report from Andres Freund. Back-patch to v10 where this test
script came in.
Discussion: https://postgr.es/m/20210930183641.7lh4jhvpipvromca@alap3.anarazel.de
Physical replication always ships WAL segment files to replicas once
they are complete. This is a problem if one WAL record is split across
a segment boundary and the primary server crashes before writing down
the segment with the next portion of the WAL record: WAL writing after
crash recovery would happily resume at the point where the broken record
started, overwriting that record ... but any standby or backup may have
already received a copy of that segment, and they are not rewinding.
This causes standbys to stop following the primary after the latter
crashes:
LOG: invalid contrecord length 7262 at A8/D9FFFBC8
because the standby is still trying to read the continuation record
(contrecord) for the original long WAL record, but it is not there and
it will never be. A workaround is to stop the replica, delete the WAL
file, and restart it -- at which point a fresh copy is brought over from
the primary. But that's pretty labor intensive, and I bet many users
would just give up and re-clone the standby instead.
A fix for this problem was already attempted in commit 515e3d84a0b5,
but it only addressed the scenario of WAL archiving, so streaming
replication would still be a problem (as well as other things such as
taking a filesystem-level backup while the server is down after having
crashed), and it had performance scalability problems too; so it had
to be reverted.
This commit fixes the problem using an approach suggested by Andres
Freund, whereby the initial portion(s) of the split-up WAL record are
kept, and a special type of WAL record is written where the contrecord
was lost, so that WAL replay in the replica knows to skip the broken
parts. With this approach, we can continue to stream/archive segment
files as soon as they are complete, and replay of the broken records
will proceed across the crash point without a hitch.
Because a new type of WAL record is added, users should be careful to
upgrade standbys first, primaries later. Otherwise they risk the standby
being unable to start if the primary happens to write such a record.
A new TAP test that exercises this is added, but the portability of it
is yet to be seen.
This has been wrong since the introduction of physical replication, so
backpatch all the way back. In stable branches, keep the new
XLogReaderState members at the end of the struct, to avoid an ABI
break.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/202108232252.dh7uxf6oxwcy@alvherre.pgsql
Contrary to the output of native perl, Msys perl generates output with
CRLF characters. There are already places in the TAP code where CRLFs
(\r\n) are automatically converted to LF (\n) on Msys, but we missed a
couple of places when running commands and using their output for
comparison, which would lead to failures.
This problem has been found thanks to the test added in 5adb067 using
TestLib::command_checks_all(), but after a closer look more code paths
were missing a filter.
This is backpatched all the way down to prevent any surprises if a new
test is introduced in stable branches.
Reviewed-by: Andrew Dunstan, Álvaro Herrera
Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
Backpatch-through: 9.6