Previously it was impossible to terminate these programs via control-C
while they were prompting for a password. We can fix that trivially
for their initial password prompts, by moving setup of the SIGINT
handler from just before to just after their initial GetConnection()
calls.
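For illustration, a minimal sketch of the reordering; the handler name
is a stand-in, while pqsignal() and GetConnection() are the real calls:

    /* Before: SIGINT was already trapped while the password prompt ran */
    pqsignal(SIGINT, sigint_handler);
    conn = GetConnection();     /* may block, prompting for a password */

    /* After: the default SIGINT action (terminate) applies during the
     * prompt, so control-C works; the handler takes over afterwards */
    conn = GetConnection();
    pqsignal(SIGINT, sigint_handler);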
This fix doesn't permit escaping out of later re-prompts, but those
should be exceedingly rare, since the user's password or the server's
authentication setup would have to have changed meanwhile. We
considered applying a fix similar to commit 46d665bc2, but that
seemed more complicated than it'd be worth. Moreover, this way is
back-patchable, which that wasn't.
The misbehavior exists in all supported versions, so back-patch to all.
Tom Lane and Nathan Bossart
Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
The error handling here was a mess, as a result of a fundamentally
bad design (relying on errno to keep its value much longer than is
safe to assume) as well as a lot of just plain sloppiness, both as
to noticing errors at all and as to reporting the correct errno.
Moreover, the recent addition of LZ4 compression broke things
completely, because liblz4 doesn't use errno to report errors.
To improve matters, keep the error state in the DirectoryMethodData or
TarMethodData struct, and add a string field so we can handle cases
that don't set errno. (The tar methods already had a version of this,
but it can be done more efficiently since all these cases use a
constant error string.) Make the dir and tar methods handle errors
in basically identical ways, which they didn't before.
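For illustration, a sketch of the added state, with field names meant
to be indicative rather than exact:

    typedef struct DirectoryMethodData
    {
        int         lasterrno;      /* saved errno, or 0 if none */
        const char *lasterrstring;  /* error string for non-errno cases */
        /* ... plus the method's existing fields ... */
    } DirectoryMethodData;

An error-reporting callback can then prefer lasterrstring when it is
set, and fall back to strerror(lasterrno) otherwise.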
This requires copying errno into the state struct in a lot of places,
which is a bit tedious, but it has the virtue that we can get rid of
ad-hoc code to save and restore errno in a number of places ... not
to mention that it fixes other places that should've saved/restored
errno but neglected to.
In passing, fix some pointlessly static buffers to be ordinary
local variables.
There remains an issue about exactly how to handle errors from
fsync(), but that seems like material for its own patch.
While the LZ4 problems are new, all the rest of this is fixes for
old bugs, so backpatch to v10 where walmethods.c was introduced.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
Coverity complained that applying get_gz_error after a failed gzclose,
as we did in one place in pg_basebackup, is unsafe. I think it's
right: it's entirely likely that the call is touching freed memory.
Change that to inspect errno, as we do for other gzclose calls.
Also, be careful to initialize errno to zero immediately before any
gzclose() call where we care about the error status. (There are
some calls where we don't, because we already failed at some previous
step.) This ensures that we don't get a misleadingly irrelevant
error code if gzclose() fails in a way that doesn't set errno.
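For illustration, the pattern at a call site looks roughly like this
(error-reporting details vary by caller):

    errno = 0;              /* in case gzclose() fails without setting it */
    if (gzclose(gzfp) != Z_OK)
    {
        /* errno now holds the real cause, or 0, not stale garbage */
        pg_log_error("could not close file \"%s\": %m", filename);
        exit(1);
    }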
We could work harder at that, but it looks to me like all such cases
are basically can't-happen if we're not misusing zlib, so it's
not worth the extra notational cruft that would be required.
Also, fix several places that simply failed to check for close-time
errors at all, mostly at some remove from the close or gzclose itself;
and one place that did check but didn't bother to report the errno.
Back-patch to v12. These mistakes are older than that, but between
the frontend logging API changes that happened in v12 and the fact
that frontend code can't rely on %m before that, the patch would need
substantial revision to work in older branches. It doesn't quite
seem worth the trouble given the lack of related field complaints.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1343113.1636489231@sss.pgh.pa.us
The documentation says plainly that \password acts on "the current user"
by default. What it actually acted on, or tried to, was the username
used to log into the current session. This is not the same thing if
one has since done SET ROLE or SET SESSION AUTHORIZATION. Aside from
the possible surprise factor, it's quite likely that the current role
doesn't have permissions to set the password of the original role.
To fix, use "SELECT CURRENT_USER" to get the role name to act on.
(This syntax works with servers at least back to 7.0.) Also, in
hopes of reducing confusion, include the role name that will be
acted on in the password prompt.
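For illustration, a simplified sketch of the new lookup, along the
lines of what psql's backslash-command code would do:

    PGresult   *res = PQexec(pset.db, "SELECT CURRENT_USER");

    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        user = pg_strdup(PQgetvalue(res, 0, 0));
    PQclear(res);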
The discrepancy from the documentation makes this a bug, so
back-patch to all supported branches.
Patch by me; thanks to Nathan Bossart for review.
Discussion: https://postgr.es/m/747443.1635536754@sss.pgh.pa.us
Non-global default privilege entries should be dumped as-is,
not made relative to the default ACL for their object type.
This would typically only matter if one had revoked some
on-by-default privileges in a global entry, and then wanted
to grant them again in a non-global entry.
Per report from Boris Korzun. This is an old bug, so back-patch
to all supported branches.
Neil Chen, test case by Masahiko Sawada
Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
If the blob TOC file cannot be parsed, the error message was failing
to print the filename as the variable holding it was shadowed by the
destination buffer for parsing. When parsing fails, the error
message prints an empty string:
./pg_restore -d foo -F d dump
pg_restore: error: invalid line in large object TOC file "": ..
..instead of the intended error message:
./pg_restore -d foo -F d dump
pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ..
Fix by renaming both variables, as the shared name was too generic to
convey what either variable actually held.
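A reduced illustration of the shadowing bug, with invented names and a
stand-in error call:

    char        fname[MAXPGPATH];   /* path of the blobs TOC file */

    snprintf(fname, MAXPGPATH, "%s/blobs.toc", dname);
    /* ... open the file and read a line ... */
    {
        char    fname[MAXPGPATH];   /* shadows the outer fname */

        if (sscanf(line, "%u %s\n", &oid, fname) != 2)
            /* "%s" refers to the inner buffer, which parsing left empty */
            error("invalid line in large object TOC file \"%s\": %s",
                  fname, line);
    }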
Backpatch all the way down to 9.6.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/A2B151F5-B32B-4F2C-BA4A-6870856D9BDE@yesql.se
Backpatch-through: 9.6
Make sure that the string parsing is limited by the size of the
destination buffer.
In pg_basebackup the available values sent from the server
are limited to two characters, so there was no risk of overflow.
In pg_dump the buffer is bounded by MAXPGPATH, and thus the limit
must be inserted via preprocessor expansion and the buffer increased
by one to account for the terminator. There is no risk of overflow
here either, since the buffer being scanned is smaller than the
destination buffer.
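A sketch of the technique; CppAsString2() is the stringification macro
that already exists in c.h:

    char        buf[MAXPGPATH + 1];

    /* expands to "%1024s", bounding what sscanf may write into buf */
    sscanf(line, "%" CppAsString2(MAXPGPATH) "s", buf);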
Backpatch the pg_basebackup fix to 11 where it was introduced, and
the pg_dump fix all the way down to 9.6.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 11 and 9.6
It was clearly the intent to do so all along, but the original coding
fat-fingered this by checking the wrong array element. We fixed it
in passing in 403a3d91c, but that later got reverted, and we forgot
to keep this bug fix.
Most of the time this'd be relatively harmless, since once we lock
any of the partitioned table's leaf partitions, that would suffice
to prevent major DDL on the partitioned table itself. However, a
childless partitioned table would get dumped with no relevant lock
whatsoever, possibly allowing dump failure or inconsistent output.
Unlike 403a3d91c, there are no versioning concerns, since every server
version that has partitioned tables will allow you to lock one.
Back-patch to v10 where partitioned tables were introduced.
Discussion: https://postgr.es/m/1018205.1634346327@sss.pgh.pa.us
This fixes a set of issues that cause different breakages or annoyances
when using pg_upgrade's test.sh to do upgrades across different major
versions:
- test.sh is completely broken when using v14 as the new version
because the Makefile rule that created testtablespace/ has been
removed. Older versions of pg_regress don't support
--make-tablespacedir, blocking the creation of the tablespace
directories. To fix that, it is simple enough to create those
directories in the script itself, but only do so when an old version is
involved. This fix is needed on HEAD and REL_14_STABLE.
- The script would fail when using PG <= v11 as the old version because
WITH OIDS relations are not supported in v12 and newer. To fix this,
steal a method from the buildfarm that uses a DO block to change all
the relations marked as WITH OIDS, allowing pg_upgrade to pass. This is
more portable than running ALTER TABLE queries on the relations causing
issues. This is fixed down to v12, and was originally authored by
Andrew Dunstan.
- Not using --extra-float-digits=0 with v11 as the old version causes
a lot of diffs in the dumps, making the whole output unreadable. This
option is only applied when using v11 as the old version. This is fixed
down to v12. The buildfarm code does that already.
Note that the addition of --wal-segsize and --allow-group-access breaks
the script when using v10 or older at initdb time, as these options
were added in v11. v10 will be EOL'd next year and nobody has
complained about those problems yet, so nothing is done about that
here. This means that this commit fixes upgrade tests using test.sh
with v11 as the minimum older version, up to HEAD, and that it is
enough to apply this change down to 12. The old and new dumps still
generate diffs, they still require manual checks, and more could be
done to reduce the noise, but this allows the tests to run with a
rather minimal amount of it.
I have tested this commit and test.sh with v11 as minimum across all the
branches where this is applied. Note that this commit has no impact on
the normal pg_upgrade test run with a simple "make check".
Author: Justin Pryzby, Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 12
Incrementing the level of the call stack reported is useful for
debugging purposes, as it makes it possible to tell which part of a
test is actually failing, especially when tests are structured with
subroutines that call routines from Test::More.
This adds more incrementations of $Test::Builder::Level where that
improves debugging (for example, it does not make much sense for some
paths like pg_rewind where long subroutines are used).
A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and a wording coming from both of us.
Usage of Test::Builder::Level has spread in v12, so a backpatch down to
that version is done.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
Per discussion, let's just follow CLDR's default zone mappings
faithfully. There are two changes here that are clear improvements:
* Mapping "Greenwich Standard Time" to Atlantic/Reykjavik is actually
a better fit than using London, because Iceland hasn't observed DST
since 1968, so this is more nearly what people might expect.
* Since the "Samoa" zone is specified to be UTC+13:00, we must map
it to Pacific/Apia not Pacific/Samoa; the latter refers to American
Samoa which is now on the other side of the date line.
The rest of these changes look like they're choosing the most populous
IANA zone as representative. Whatever the details, we're just going
to say "if you don't like this mapping, complain to CLDR".
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
This corrects a bunch of entries in win32_tzmap[], and adds a few
new ones, based on the CLDR project's windowsZones.xml file.
Non-cosmetic changes fall into four main categories:
* Flat-out errors:
US/Aleutan doesn't exist
America/Salvador doesn't exist
Asia/Baku is wrong for Yerevan
Asia/Dhaka (Bangladesh) is wrong for Astana (Kazakhstan)
Europe/Bucharest is wrong for Chisinau
America/Mexico_City is wrong for Chetumal
America/Buenos_Aires is wrong for Cayenne
America/Caracas has its own zone, so poor fit for La Paz
US/Eastern is wrong for Haiti
US/Eastern is wrong for Indiana (East)
Asia/Karachi is wrong for Tashkent
Etc/UTC+12 doesn't exist
Signs of Etc/GMT zones were backwards
* Judgment calls:
(These changes follow CLDR's choices, except for the first one)
Use Europe/London for "Greenwich Standard Time", since that seems much
more likely than Africa/Casablanca to be what people will think that
zone name means. CLDR has Atlantic/Reykjavik here, but that's no better.
Asia/Shanghai seems a better fit than Hong Kong for "China Standard
Time".
Europe/Sarajevo is now a link to Belgrade, ie "Central Europe Standard
Time"; so use Warsaw for "Central European Standard Time".
America/Sao_Paulo seems more representative than Araguaina for
"E. South America Standard Time".
Africa/Johannesburg seems more representative than Harare for
"South Africa Standard Time".
* New Windows zone names:
"Israel Standard Time"
"Kaliningrad Standard Time"
"Russia Time Zone N" for various N
"Singapore Standard Time"
"South Sudan Standard Time"
"W. Central Africa Standard Time"
"West Bank Standard Time"
"Yukon Standard Time"
Some of these replace older spellings, but I kept the older spellings
too in case our code runs on a machine with the older data.
* Replace aliases (tzdb Links) with underlying city-named zones:
(This tracks tzdb's longstanding practice, and reduces inconsistency
with the rest of the entries, as well as with CLDR.)
US/Alaska
Asia/Kuwait
Asia/Muscat
Canada/Atlantic
Australia/Canberra
Canada/Saskatchewan
US/Central
US/Eastern
US/Hawaii
US/Mountain
Canada/Newfoundland
US/Pacific
Back-patch to all supported branches, as is our usual practice for
time zone data updates.
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
The original intent seems to have been to sort case-insensitively
by the Windows zone name, but various changes over the years did
not get that memo. This commit just moves a few entries to
restore exact alphabetic order, to ease comparison to the outputs
of processing scripts.
Back-patch to all supported branches, as is our usual practice for
time zone data updates.
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
Previously, socket errors such as an invalid socket or a failure of a
socket wait method during a benchmark run caused pgbench to exit with
status 0. Instead, such errors during the run should result in exit
status 2.
Back-patch to v12 where pgbench started reporting exit status.
Original complaint and patch by Hayato Kuroda.
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
A failure of a socket wait method like "select()" does not terminate
pgbench, so the log level of the error message emitted when such a
failure happens should be ERROR. But previously FATAL was used in that
case.
Back-patch to v13 where pgbench started using common logging API.
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/20210617005934.8bd37bf72efd5f1b38e6f482@sraoss.co.jp
This allows file.c to include just visibilitymapdefs.h, and in turn
removes the inclusion of postgres.h from relcache.h.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20210913232614.czafiubr435l6egi%40alap3.anarazel.de
Author: Alexander Korotkov
Reviewed-by: Andres Freund, Tom Lane, Alvaro Herrera
Backpatch-through: 13
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string. Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles. We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.
Back-patch, as commit 6c450a861 was. This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
Replace fixed-length command buffers with psprintf() calls. We didn't
have anything as convenient as psprintf() when this code was written,
but now that we do, there's little reason for the limitation to
stand. Removing it eliminates some corner cases where (for example)
starting the postmaster with a whole lot of options fails.
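For illustration, the change is of this general shape (variable names
are meant to be representative of pg_ctl's, not exact):

    /* Before: a fixed-size buffer the whole command had to fit into */
    char        cmd[MAXPGPATH];

    snprintf(cmd, MAXPGPATH, "\"%s\" %s%s < \"%s\" 2>&1",
             exec_path, pgdata_opt, post_opts, DEVNULL);

    /* After: psprintf() allocates exactly as much as the command needs */
    char       *cmd = psprintf("\"%s\" %s%s < \"%s\" 2>&1",
                               exec_path, pgdata_opt, post_opts, DEVNULL);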
Most individual file names that pg_ctl deals with are still restricted
to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename.
Back-patch to all supported branches.
Phil Krylov
Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
For no particularly good reason, getPolicies() queried pg_policy
separately for each table. We can collect all the policies in
a single query instead, and attach them to the correct TableInfo
objects using findTableByOid() lookups. On the regression
database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local
server.
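For illustration, a simplified sketch of the one-query approach;
ExecuteSqlQuery(), findTableByOid() and atooid() are existing pg_dump
facilities, but the query text here is abridged:

    res = ExecuteSqlQuery(fout,
                          "SELECT tableoid, oid, polrelid, polname "
                          "FROM pg_catalog.pg_policy",
                          PGRES_TUPLES_OK);
    for (i = 0; i < PQntuples(res); i++)
    {
        TableInfo  *tbinfo =
            findTableByOid(atooid(PQgetvalue(res, i, 2)));

        if (tbinfo == NULL)
            continue;           /* not a table we're interested in */
        /* ... build a PolicyInfo entry attached to tbinfo ... */
    }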
Per complaint from Hubert Depesz Lubaczewski. Since this is such
a simple fix and can have a visible performance benefit, back-patch
to all supported branches.
Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
There's long been a "TODO: there might be some value in caching
the results" annotation on pg_dump's getFormattedTypeName function;
but we hadn't gotten around to checking what it was costing us to
repetitively look up type names. It turns out that when dumping the
current regression database, about 10% of the total number of queries
issued are duplicative format_type() queries. However, Hubert Depesz
Lubaczewski reported a not-unusual case where these account for over
half of the queries issued by pg_dump. Individually these queries
aren't expensive, but when network lag is a factor, they add up to a
problem. We can very easily add some caching to getFormattedTypeName
to solve it.
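For illustration, a simplified sketch of the caching shape, keyed off
pg_dump's existing findTypeByOid() lookup (the cache field's name is
illustrative):

    TypeInfo   *typeInfo = findTypeByOid(oid);

    if (typeInfo && typeInfo->ftypname)
        return typeInfo->ftypname;      /* cached from a previous call */

    /* ... otherwise run the format_type() query once ... */
    if (typeInfo)
        typeInfo->ftypname = pg_strdup(result);     /* cache for reuse */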
Since this is such a simple fix and can have a visible performance
benefit, back-patch to all supported branches.
Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent. The
omission first appeared in commit 1c5d9270e339, so backpatch to 12.
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
In a backup manifest, WAL-Ranges stores the range of WAL that is
required for the backup to be valid. pg_verifybackup would then
internally use pg_waldump for the checks based on this data.
When the backup started on a timeline higher than 1, so that a history
file was consulted when generating the manifest data, the calculation
of the WAL range for the first timeline to check was incorrect. The
previous logic used the start position of the first timeline as the
start LSN, but it needs to use the start LSN of the backup. This would
cause failures with pg_verifybackup, or any tools making use of the
backup manifests.
This commit adds a test whose logic uses a self-promoted node, making
it rather cheap.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210818.143031.1867083699202617521.horikyota.ntt@gmail.com
Backpatch-through: 13
By default, pg_verifybackup needs pg_waldump to check the range of WAL
segments required for a backup, unless --no-parse-wal is specified.
The code checked for the presence of the pg_waldump binary in the
installation and reported an error if it was missing, but it forgot to
properly exit(). This could lead to confusing errors being reported.
Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
Backpatch-through: 13
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously -x set this value to be 2 billion less than the -x value.
However, this causes the server to immediately scan all relations'
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery. pg_upgrade will use this option to better create
the new cluster to match the old cluster.
Reported-by: Jason Harvey, Floris Van Nee
Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com
Author: Bertrand Drouvot
Backpatch-through: 9.6
These were introduced by 7fbe0c8, and could happen in pg_basebackup and
pg_receivewal.
Per report from Coverity for the ones in walmethods.c; I spotted the
ones in receivelog.c after more review.
Backpatch-through: 10
The logic handling the opening of new WAL segments was fuzzy when using
--compress if a partial, non-compressed, segment with the same base name
existed in the repository storing those files. In this case, using
--compress would cause the code to first check for the existence and the
size of a non-compressed segment, followed by the opening of a new
compressed, partial, segment. The code was accidentally working
correctly on most platforms as the buildfarm has proved, except
bowerbird where gzflush() could fail in this code path. It is wrong
anyway to take the code path used for pre-padding a new partial,
non-compressed segment, so let's fix it.
Note that this issue exists when users mix successive runs of
pg_receivewal with or without compression, as discovered with the tests
introduced by ffc9dda.
While on it, this refactors the code so that the code paths that need
to know about the ".gz" suffix go from four down to one in
walmethods.c, easing a bit the introduction of new compression methods.
This addresses a
second issue where log messages generated for an unexpected failure
would not show the compressed segment name involved, which was
confusing, printing instead the name of the non-compressed equivalent.
Reported-by: Georgios Kokolatos
Discussion: https://postgr.es/m/YPDLz2x3o1aX2wRh@paquier.xyz
Backpatch-through: 10
pg_dump failed to preserve the 'enabled' flag (which can be not only
disabled, but also REPLICA or ALWAYS) for partitions which had it
changed from their respective parents. Attempt to handle that by
including a definition for such triggers in the dump, but replace the
standard CREATE TRIGGER line with an ALTER TRIGGER line.
Backpatch to 11, where these triggers can exist. In branches 11 and 12,
pick up a few test lines from commit b9b408c48724 to verify that
pg_upgrade is okay with these arrangements.
Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
Instead use the common/int.h functions to check for integer overflow
in a more C-standard-compliant fashion. This is motivated by recent
failures on buildfarm member moonjelly, where it appears that
development-tip gcc is optimizing without regard to the -fwrapv
switch. Presumably that's a gcc bug that will be fixed soon, but
we might as well install cleaner coding here rather than wait.
(This does not address the question of whether we'll ever be able
to get rid of using -fwrapv. Testing shows that this spot is the
only place where doing so creates visible regression test failures,
but unfortunately that proves very little.)
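For illustration, a sketch of the replacement style, using the
overflow-checking helpers from common/int.h (which return true on
overflow); the surrounding variables are stand-ins:

    #include "common/int.h"

    int64       result;

    if (pg_add_s64_overflow(lval, rval, &result))
    {
        /* report overflow instead of relying on -fwrapv wraparound */
        pg_log_error("bigint add out of range");
        return false;
    }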
Back-patch to v12. The common/int.h functions exist in v11, but
that branch doesn't use them in any client-side code. I judge
that this case isn't interesting enough in the real world to take
even a small risk of issues from being the first such use.
Tom Lane and Fabien Coelho
Discussion: https://postgr.es/m/73927.1624815543@sss.pgh.pa.us
002_pgbench_no_server was printing some array pointers instead of the
actual contents of those arrays for the expected outputs of stdout and
stderr for a tested command. This did not add any information that
could help with debugging, as the test names already make it possible
to track failure locations, if any.
This commit simply removes those logs as the rest of the printed
information is redundant with command_checks_all().
Per discussion with Andrew Dunstan and Álvaro Herrera.
Discussion: https://postgr.es/m/YNXNFaG7IgkzZanD@paquier.xyz
Backpatch-through: 11
This fixes a couple of problems in the code referred to by this
commit's subject:
- Replace the use of open() with slurp_file(), fixing an issue reported
by buildfarm member fairywren, whose perl installation keeps around
CRLF characters, causing the matching patterns for the logs to fail.
- Remove the eval block, which is not really necessary.
This set of issues came to light after fixing a different issue with
c13585fe, and the code has been wrong since it was introduced.
Reported-by: Andrew Dunstan, and buildfarm member fairywren
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/0f49303e-7784-b3ee-200b-cbf67be2eb9e@dunslane.net
Backpatch-through: 11
The logic checking the format of per-thread logs passed "$re" directly
to grep(), which caused the test to consider all the logs as matching
without caring about their format at all. Using "/$re/" makes grep()
perform a regex match, which is what we want here.
While on it, improve some of the tests to be more picky with the
patterns expected and add more comments to describe the tests.
Issue discovered while digging into a separate patch.
Author: Fabien Coelho, Michael Paquier
Discussion: https://postgr.es/m/YNPsPAUoVDCpPOGk@paquier.xyz
Backpatch-through: 11
Recent glibc versions have made mktime() fail if tm_isdst is
inconsistent with the prevailing timezone; in particular it fails for
tm_isdst = 1 when the zone is UTC. (This seems wildly inconsistent
with the POSIX-mandated treatment of "incorrect" values for the other
fields of struct tm, so if you ask me it's a bug, but I bet they'll
say it's intentional.) This has been observed to cause cosmetic
problems when pg_restore'ing an archive created in a different
timezone.
To fix, do mktime() using the field values from the archive, and if
that fails try again with tm_isdst = -1. This will give a result
that's off by the UTC-offset difference from the original zone, but
that was true before, too. It's not terribly critical since we don't
do anything with the result except possibly print it. (Someday we
should flush this entire bit of logic and record a standard-format
timestamp in the archive instead. That's not okay for a back-patched
bug fix, though.)
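For illustration, the retry looks roughly like this; the helper that
fills the struct from the archive's fields is hypothetical:

    struct tm   tm;
    time_t      result;

    fill_tm_from_archive(&tm);          /* hypothetical helper */
    result = mktime(&tm);
    if (result == (time_t) -1)
    {
        /* refill, since a failed mktime() may have scribbled on tm */
        fill_tm_from_archive(&tm);
        tm.tm_isdst = -1;               /* let mktime() determine DST */
        result = mktime(&tm);
    }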
Also, guard our only other use of mktime() by having initdb's
build_time_t() set tm_isdst = -1 not 0. This case could only have
an issue in zones that are DST year-round, but I think some do exist,
or could in the future.
Per report from Wells Oliver. Back-patch to all supported
versions, since any of them might need to run with a newer glibc.
Discussion: https://postgr.es/m/CAOC+FBWDhDHO7G-i1_n_hjRzCnUeFO+H-Czi1y10mFhRWpBrew@mail.gmail.com
The set of subcommands supported by \dAp, \do and \dy was described
incorrectly in psql's --help. The documentation was already consistent
with the code.
Reported-by: inoas, from IRC
Author: Matthijs van der Vleuten
Reviewed-by: Neil Chen
Discussion: https://postgr.es/m/6a984e24-2171-4039-9050-92d55e7b23fe@www.fastmail.com
Backpatch-through: 9.6
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.
This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them. However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.
Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.) Those functions were already
made proof against this class of problem, cf CVE-2006-2313.
To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it. (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem. So we just want a
version for those callers that don't have any better way of handling
this issue.)
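For reference, the back-branch macro can be as simple as this, relying
on strnlen() to clamp PQmblen()'s answer:

    /* mblen() function that won't run past the end of the string */
    #define PQmblenBounded(s, e)  strnlen(s, PQmblen(s, e))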
Per private report from houjingyi. Back-patch to all supported branches.
The error messages, docs, and one of the options were using 'parallel
degree' to indicate the parallelism used by the VACUUM command. We
normally use 'parallel workers' elsewhere, so change it for parallel
vacuum accordingly.
Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
"typename" is a C++ keyword, so pg_upgrade.h fails to compile in C++.
Fortunately, there seems no likely reason for somebody to need to
do that. Nonetheless, it's project policy that all .h files should
pass cpluspluscheck, so rename the argument to fix that.
Oversight in 57c081de0; back-patch as that was. (The policy requiring
pg_upgrade.h to pass cpluspluscheck only goes back to v12, but it
seems best to keep this code looking the same in all branches.)
Commits 29aeda6e4 et al closed up some oversights involving not checking
for non-upgradable types within container types, such as arrays and
ranges. However, I only looked at version.c, failing to notice that
there were substantially-equivalent tests in check.c. (The division
of responsibility between those files is less than clear...)
In addition, because genbki.pl does not guarantee that auto-generated
rowtype OIDs will hold still across versions, we need to consider that
the composite type associated with a system catalog or view is
non-upgradable. It seems unlikely that someone would have a user
column declared that way, but if they did, trying to read it in another
PG version would likely draw "no such pg_type OID" failures, thanks
to the type OID embedded in composite Datums.
To support the composite and reg*-type cases, extend the recursive
query that does the search to allow any base query that returns
a column of pg_type OIDs, rather than limiting it to exactly one
starting type.
As before, back-patch to all supported branches.
Discussion: https://postgr.es/m/2798740.1619622555@sss.pgh.pa.us
pg_checksums uses two counters, total size and current size,
to calculate the progress. Previously the progress that
pg_checksums reported could not reach 100% at the end.
The cause of this issue was that, for the current size, only the pages
other than new ones in each file were counted, while the full size of
each file was counted toward the total size. That is, the combined size
of all new pages would remain as the difference between the total size
and the current size.
This commit fixes this issue by making pg_checksums count
the sizes of all pages including new ones in each file as
the current size.
Back-patch to v12 where progress reporting was added to pg_checksums.
Reported-by: Shinya Kato
Author: Shinya Kato
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/TYAPR01MB289656B1ACA0A5E7CAD07BE3C47A9@TYAPR01MB2896.jpnprd01.prod.outlook.com
Despite the clear comments pointing out that the duplicative code
segments in ReadHead() and _discoverArchiveFormat() needed to be
in sync, they were not: the latter did not bother to apply any of
the sanity checks in the former. We'd missed noticing this partly
because none of those checks would fail in scenarios we customarily
test, and partly because the oversight would be masked if both
segments execute, which they would in cases other than needing to
autodetect the format of a non-seekable stdin source. However,
in a case meeting all these requirements --- for example, trying
to read a newer-than-supported archive format from non-seekable
stdin --- pg_restore missed applying the version check and would
likely dump core or otherwise misbehave.
The whole thing is silly anyway, because there seems little reason
to duplicate the logic beyond the one-line verification that the
file starts with "PGDMP". There seems to have been an undocumented
assumption that multiple major formats (major enough to require
separate reader modules) would nonetheless share the first half-dozen
fields of the custom-format header. This seems unlikely, so let's
fix it by just nuking the duplicate logic in _discoverArchiveFormat().
Also get rid of the pointless attempt to seek back to the start of
the file after successful autodetection. That wastes cycles and
it means we have four behaviors to verify, not two.
Per bug #16951 from Sergey Koposov. This has been broken for
decades, so back-patch to all supported versions.
Discussion: https://postgr.es/m/16951-a4dd68cf0de23048@postgresql.org
Jasen Betts reported yet another unintended side effect of commit
85c54287a: reconnecting with "\c service=whatever" did not have the
expected results. The reason is that starting from the output of
PQconndefaults() effectively allows environment variables (such
as PGPORT) to override entries in the service file, whereas the
normal priority is the other way around.
Not using PQconndefaults at all would require yet a third main code
path in do_connect's parameter setup, so I don't really want to fix
it that way. But we can have the logic effectively ignore all the
default values for just a couple more lines of code.
This patch doesn't change the behavior for "\c -reuse-previous=on
service=whatever". That remains significantly different from before
85c54287a, because many more parameters will be re-used, and it will
thus not be possible for service entries to replace them. But I think this
is (mostly?) intentional. In any case, since libpq does not report
where it got parameter values from, it's hard to do differently.
Per bug #16936 from Jasen Betts. As with the previous patches,
back-patch to all supported branches. (9.5 is unfortunately now
out of support, so this won't get fixed there.)
Discussion: https://postgr.es/m/16936-3f524322a53a29f0@postgresql.org