Non-global default privilege entries should be dumped as-is,
not made relative to the default ACL for their object type.
This would typically only matter if one had revoked some
on-by-default privileges in a global entry, and then wanted
to grant them again in a non-global entry.
Per report from Boris Korzun. This is an old bug, so back-patch
to all supported branches.
Neil Chen, test case by Masahiko Sawada
Discussion: https://postgr.es/m/111621616618184@mail.yandex.ru
Discussion: https://postgr.es/m/CAA3qoJnr2+1dVJObNtfec=qW4Z0nz=A9+r5bZKoTSy5RDjskMw@mail.gmail.com
Along the same lines as 047329624, ed2c7f65b and daa9fe8a5, reduce
code duplication by having just one copy of the parts of the query
that are the same across all server versions; and make the
conditionals control the smallest possible amount of code.
This also gets rid of the confusing assortment of different ways
to accomplish the same result that we had here before.
While at it, make sure all three relevant parts of the function
list the fields in the same order. This is just neatnik-ism,
of course.
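As a rough standalone sketch of the pattern (the column name, version
cutoff, and helper below are illustrative, not the actual query text):
keep one copy of the shared query and let the version check control only
the part that differs.

    #include <stdio.h>
    #include <string.h>

    static void
    append(char *buf, size_t len, const char *s)
    {
        strncat(buf, s, len - strlen(buf) - 1);
    }

    static void
    build_query(char *buf, size_t len, int remote_version)
    {
        buf[0] = '\0';
        /* one copy of the part common to all server versions */
        append(buf, len, "SELECT c.oid, c.relname, c.relkind");
        /* the conditional controls only the column that differs */
        if (remote_version >= 100000)
            append(buf, len, ", c.relispartition");
        else
            append(buf, len, ", false AS relispartition");
        append(buf, len, " FROM pg_class c");
    }

    int
    main(void)
    {
        char        q[256];

        build_query(q, sizeof(q), 90600);
        puts(q);
        build_query(q, sizeof(q), 140000);
        puts(q);
        return 0;
    }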
Discussion: https://postgr.es/m/1240992.1634419055@sss.pgh.pa.us
If the blob TOC file cannot be parsed, the error message was failing
to print the filename as the variable holding it was shadowed by the
destination buffer for parsing. When a line fails to parse,
the error will print an empty string:
./pg_restore -d foo -F d dump
pg_restore: error: invalid line in large object TOC file "": ..
..instead of the intended error message:
./pg_restore -d foo -F d dump
pg_restore: error: invalid line in large object TOC file "dump/blobs.toc": ..
Fix by renaming both variables, as the shared name was too generic to
convey what either variable held.
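A minimal sketch of the fixed shape (function and variable names here are
made up for illustration, not the pg_restore code): keep the TOC path and
the sscanf scratch buffer under distinct names, so the error message
refers to the path.

    #include <stdio.h>

    static void
    parse_blob_line(const char *fname, const char *line)
    {
        char        lofname[64];    /* distinct name, no shadowing */
        unsigned int oid;

        if (sscanf(line, "%u %63s", &oid, lofname) != 2)
            /* print the TOC path, not the (possibly empty) scratch buffer */
            fprintf(stderr,
                    "invalid line in large object TOC file \"%s\": %s\n",
                    fname, line);
        else
            printf("oid %u, file %s\n", oid, lofname);
    }

    int
    main(void)
    {
        parse_blob_line("dump/blobs.toc", "not a parsable line");
        parse_blob_line("dump/blobs.toc", "16384 blob_16384.dat");
        return 0;
    }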
Backpatch all the way down to 9.6.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/A2B151F5-B32B-4F2C-BA4A-6870856D9BDE@yesql.se
Backpatch-through: 9.6
Make sure that the string parsing is limited by the size of the
destination buffer.
In pg_basebackup the available values sent from the server
are limited to two characters, so there was no risk of overflow.
In pg_dump the buffer is bounded by MAXPGPATH, and thus the limit
must be inserted via preprocessor expansion and the buffer increased
by one to account for the terminator. There is no risk of overflow
here, since in this case, the buffer scanned is smaller than the
destination buffer.
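A standalone sketch of the technique (MY_PATH_MAX and the format string
are assumptions for the example, not the committed code): the buffer size
is turned into a scanf field width at compile time via stringification.

    #include <stdio.h>

    #define MY_PATH_MAX 1024            /* stand-in for the real limit */
    #define STR_(x) #x
    #define STR(x) STR_(x)

    int
    main(void)
    {
        char        fname[MY_PATH_MAX + 1];     /* +1 for the terminator */
        unsigned int oid;

        /* the format expands to "%u %1024s" at compile time */
        if (sscanf("16384 blob_16384.dat", "%u %" STR(MY_PATH_MAX) "s",
                   &oid, fname) == 2)
            printf("oid = %u, file = %s\n", oid, fname);
        return 0;
    }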
Backpatch the pg_basebackup fix to 11 where it was introduced, and
the pg_dump fix all the way down to 9.6.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 11 and 9.6
The tests added by c0280bc and d9ddc50 in 001_basic.pl have introduced
commands calling psql directly, making them sensitive to the
environment. One issue was that those commands forgot -X, so a local
.psqlrc would be used, causing all those tests to fail if psql cannot
properly parse that file.
TAP tests should be designed so that they run in an isolated fashion,
without any dependency on the environment where they are run. As
PostgresNode::psql already gives all the facilities those new tests
need, switch to that instead of calling plain psql commands where
interactions with a backend are needed. The test is slightly refactored
to be able to check the expected patterns of stdout and stderr,
keeping the same amount of coverage as previously.
Reported-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzn8ftvcDPwomn+y04JJzbT=TG7TN=QsmSEATUOW-ZuvQQ@mail.gmail.com
It was clearly the intent to do so all along, but the original coding
fat-fingered this by checking the wrong array element. We fixed it
in passing in 403a3d91c, but that later got reverted, and we forgot
to keep this bug fix.
Most of the time this'd be relatively harmless, since once we lock
any of the partitioned table's leaf partitions, that would suffice
to prevent major DDL on the partitioned table itself. However, a
childless partitioned table would get dumped with no relevant lock
whatsoever, possibly allowing dump failure or inconsistent output.
Unlike 403a3d91c, there are no versioning concerns, since every server
version that has partitioned tables will allow you to lock one.
Back-patch to v10 where partitioned tables were introduced.
Discussion: https://postgr.es/m/1018205.1634346327@sss.pgh.pa.us
Avoid calling contrib/amcheck functions with relations that are
unsuitable for checking. Specifically, don't attempt verification of
temporary relations, or indexes whose pg_index entry indicates that the
index is invalid, or not ready.
These relations are not supported by any of the contrib/amcheck
functions, for reasons that are pretty fundamental. For example, the
implementation of REINDEX CONCURRENTLY can add its own "transient"
pg_index entries, which has rather unclear implications for the B-Tree
verification functions, at least in the general case -- so they just
treat it as an error. It falls to the amcheck caller (in this case
pg_amcheck) to deal with the situation at a higher level.
pg_amcheck now simply treats these conditions as additional "visibility
concerns" when it queries system catalogs. This is a little arbitrary.
It seems to have the fewest problems among the available
alternatives.
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Bug: #17212
Discussion: https://postgr.es/m/17212-34dd4a1d6bba98bf@postgresql.org
Backpatch: 14-, where pg_amcheck was introduced.
This fixes a set of issues that cause different breakages or annoyances
when using pg_upgrade's test.sh to do upgrades across different major
versions:
- test.sh is completely broken when using v14 as the new version because
the Makefile rule creating testtablespace/ has been removed. Older
versions of pg_regress don't support --make-tablespacedir, blocking the
creation of the tablespace. To fix that, it is simple enough to create
those directories in the script itself, but only do that when an old
version is involved. This fix is needed on HEAD and REL_14_STABLE.
- The script would fail when using PG <= v11 as the old version because
WITH OIDS relations are no longer supported as of v12. To fix this, this
steals a method from the buildfarm that uses a DO block to change all
the relations marked as WITH OIDS, allowing pg_upgrade to pass. This is
more portable than issuing ALTER TABLE queries on the relations causing
issues. This is fixed down to v12, and was originally authored by Andrew
Dunstan.
- Not using --extra-float-digits=0 with v11 as the old version causes
a lot of diffs in the dumps, making the whole output unreadable. This is
only done when using v11 as the old version. This is fixed down to v12.
The buildfarm code uses that already.
Note that the addition of --wal-segsize and --allow-group-access breaks
the script when using v10 or older at initdb time, as these options were
added in 11. v10 will be EOL'd next year and nobody has complained about
those problems yet, so nothing is done about that. This means that this
commit fixes upgrade tests using test.sh with v11 as the minimum older
version, up to HEAD, and that it is enough to apply this change down to
12. The old and new dumps still generate diffs, still require manual
checks, and more could be done to reduce the noise, but this allows the
tests to run with a rather minimal amount of it.
I have tested this commit and test.sh with v11 as minimum across all the
branches where this is applied. Note that this commit has no impact on
the normal pg_upgrade test run with a simple "make check".
Author: Justin Pryzby, Andrew Dunstan, Michael Paquier
Discussion: https://postgr.es/m/20201206180248.GI24052@telsasoft.com
Backpatch-through: 12
A repeated complaint was that scan-build thought that if the \timing
setting changes during processing of a query, the post-processing
might read garbage time values. This is probably not possible right
now, but it's not entirely inconceivable given the code structure. So
silence this warning with a small restructuring that makes this more
robust. The other warnings were a few dead stores that are easy to
remove.
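As a hedged sketch of one way to make such post-processing robust (not
necessarily the committed restructuring; the variable names are stand-ins
for psql's state): read the timing setting once, before running the
query, so the later report only ever sees that snapshot.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static bool timing_setting = true;  /* stands in for psql's \timing state */

    int
    main(void)
    {
        bool        timing = timing_setting;    /* snapshot taken up front */
        clock_t     before = 0;

        if (timing)
            before = clock();

        /* ... run the query here ... */

        if (timing)
        {
            clock_t     after = clock();

            printf("Time: %.3f ms\n",
                   1000.0 * (after - before) / CLOCKS_PER_SEC);
        }
        return 0;
    }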
Discussion: https://www.postgresql.org/message-id/2570e2ae-fa0f-aac9-f72f-bb59a9983a20@enterprisedb.com
Incrementing the reported level of the call stack is useful for
debugging purposes, as it makes it possible to know which part of a test
is exactly failing, especially if a test is structured with subroutines
that call routines from Test::More.
This adds more increments of $Test::Builder::Level where they improve
debugging (for example, it does not make sense for some paths like
pg_rewind where long subroutines are used).
A note is added to src/test/perl/README about that, based on a
suggestion from Andrew Dunstan and a wording coming from both of us.
Usage of Test::Builder::Level has spread in 12, so a backpatch down to
this version is done.
Reviewed-by: Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson
Discussion: https://postgr.es/m/YV1CCFwgM1RV1LeS@paquier.xyz
Backpatch-through: 12
Like BASE_BACKUP, CREATE_REPLICATION_SLOT has historically used a
hard-coded syntax. To improve future extensibility, adopt a flexible
options syntax here, too.
In the new syntax, instead of three mutually exclusive options
EXPORT_SNAPSHOT, USE_SNAPSHOT, and NOEXPORT_SNAPSHOT, there is now a single
SNAPSHOT option with three possible values: 'export', 'use', and 'nothing'.
This commit does not remove support for the old syntax. It just adds
the new one as an additional option, and makes pg_receivewal,
pg_recvlogical, and walreceiver processes use it.
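As an illustration only (the version cutoff, output plugin, and quoting
below are assumptions, not walreceiver's actual code), a client could
pick between the two syntaxes roughly like this:

    #include <stdio.h>

    static void
    build_create_slot_command(char *buf, size_t len, int server_version,
                              const char *slot)
    {
        if (server_version >= 150000)   /* cutoff assumed for illustration */
            snprintf(buf, len, "CREATE_REPLICATION_SLOT \"%s\" LOGICAL "
                     "\"pgoutput\" (SNAPSHOT 'use')", slot);
        else
            snprintf(buf, len, "CREATE_REPLICATION_SLOT \"%s\" LOGICAL "
                     "\"pgoutput\" USE_SNAPSHOT", slot);
    }

    int
    main(void)
    {
        char        cmd[256];

        build_create_slot_command(cmd, sizeof(cmd), 150000, "mysub");
        puts(cmd);
        build_create_slot_command(cmd, sizeof(cmd), 140000, "mysub");
        puts(cmd);
        return 0;
    }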
Patch by me, reviewed by Fabien Coelho, Sergei Kornilov, and
Fujii Masao.
Discussion: http://postgr.es/m/CA+TgmobAczXDRO_Gr2euo_TxgzaH1JxbNxvFx=HYvBinefNH8Q@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Previously, BASE_BACKUP used an entirely hard-coded syntax, but that's
hard to extend. Instead, adopt the same kind of syntax we've used for
SQL commands such as VACUUM, ANALYZE, COPY, and EXPLAIN, where it's
not necessary for all of the option names to be parser keywords.
In the new syntax, most of the options now take an optional Boolean
argument. To match our practice in other places, the options which
the old syntax called NOWAIT and NOVERIFY_CHECKSUMS are in the
new syntax called WAIT and VERIFY_CHECKSUMS, with their sense reversed
accordingly. In the new syntax, the FAST option has been replaced by a
CHECKPOINT option whose value may be 'fast' or 'spread'.
This commit does not remove support for the old syntax. It just adds
the new one as an additional option, and makes pg_basebackup prefer
the new syntax when the server is new enough to support it.
Patch by me, reviewed and tested by Fabien Coelho, Sergei Kornilov,
Fujii Masao, and Tushar Ahuja.
Discussion: http://postgr.es/m/CA+TgmobAczXDRO_Gr2euo_TxgzaH1JxbNxvFx=HYvBinefNH8Q@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmoZGwR=ZVWFeecncubEyPdwghnvfkkdBe9BLccLSiqdf9Q@mail.gmail.com
Per discussion, let's just follow CLDR's default zone mappings
faithfully. There are two changes here that are clear improvements:
* Mapping "Greenwich Standard Time" to Atlantic/Reykjavik is actually
a better fit than using London, because Iceland hasn't observed DST
since 1968, so this is more nearly what people might expect.
* Since the "Samoa" zone is specified to be UTC+13:00, we must map
it to Pacific/Apia not Pacific/Samoa; the latter refers to American
Samoa which is now on the other side of the date line.
The rest of these changes look like they're choosing the most populous
IANA zone as representative. Whatever the details, we're just going
to say "if you don't like this mapping, complain to CLDR".
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
This corrects a bunch of entries in win32_tzmap[], and adds a few
new ones, based on the CLDR project's windowsZones.xml file.
Non-cosmetic changes fall into four main categories:
* Flat-out errors:
US/Aleutan doesn't exist
America/Salvador doesn't exist
Asia/Baku is wrong for Yerevan
Asia/Dhaka (Bangladesh) is wrong for Astana (Kazakhstan)
Europe/Bucharest is wrong for Chisinau
America/Mexico_City is wrong for Chetumal
America/Buenos_Aires is wrong for Cayenne
America/Caracas has its own zone, so poor fit for La Paz
US/Eastern is wrong for Haiti
US/Eastern is wrong for Indiana (East)
Asia/Karachi is wrong for Tashkent
Etc/UTC+12 doesn't exist
Signs of Etc/GMT zones were backwards
* Judgment calls:
(These changes follow CLDR's choices, except for the first one)
Use Europe/London for "Greenwich Standard Time", since that seems much
more likely than Africa/Casablanca to be what people will think that
zone name means. CLDR has Atlantic/Reykjavik here, but that's no better.
Asia/Shanghai seems a better fit than Hong Kong for "China Standard
Time".
Europe/Sarajevo is now a link to Belgrade, ie "Central Europe Standard
Time"; so use Warsaw for "Central European Standard Time".
America/Sao_Paulo seems more representative than Araguaina for
"E. South America Standard Time".
Africa/Johannesburg seems more representative than Harare for
"South Africa Standard Time".
* New Windows zone names:
"Israel Standard Time"
"Kaliningrad Standard Time"
"Russia Time Zone N" for various N
"Singapore Standard Time"
"South Sudan Standard Time"
"W. Central Africa Standard Time"
"West Bank Standard Time"
"Yukon Standard Time"
Some of these replace older spellings, but I kept the older spellings
too in case our code runs on a machine with the older data.
* Replace aliases (tzdb Links) with underlying city-named zones:
(This tracks tzdb's longstanding practice, and reduces inconsistency
with the rest of the entries, as well as with CLDR.)
US/Alaska
Asia/Kuwait
Asia/Muscat
Canada/Atlantic
Australia/Canberra
Canada/Saskatchewan
US/Central
US/Eastern
US/Hawaii
US/Mountain
Canada/Newfoundland
US/Pacific
Back-patch to all supported branches, as is our usual practice for
time zone data updates.
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
The original intent seems to have been to sort case-insensitively
by the Windows zone name, but various changes over the years did
not get that memo. This commit just moves a few entries to
restore exact alphabetic order, to ease comparison to the outputs
of processing scripts.
Back-patch to all supported branches, as is our usual practice for
time zone data updates.
Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
Previously, socket errors such as an invalid socket or a failure of the
socket wait method during the benchmark caused pgbench to exit with status 0.
Instead, errors during the run should result in exit status 2.
Back-patch to v12 where pgbench started reporting exit status.
Original complaint and patch by Hayato Kuroda.
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
A failure of the socket wait method, like "select()", doesn't terminate pgbench,
so the log level of the error message emitted when that failure happens should
be ERROR. But previously FATAL was used in that case.
Back-patch to v13 where pgbench started using common logging API.
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/20210617005934.8bd37bf72efd5f1b38e6f482@sraoss.co.jp
That makes it possible to include just visibilitymapdefs.h from file.c, and
in turn, to remove the include of postgres.h from relcache.h.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/20210913232614.czafiubr435l6egi%40alap3.anarazel.de
Author: Alexander Korotkov
Reviewed-by: Andres Freund, Tom Lane, Alvaro Herrera
Backpatch-through: 13
This addresses two issues with the tests added in 0c39c292 for runtime
GUCs:
- Re-enable the test on Msys. The test could fail because of \r\n
generated by Msys perl. 0d91c52a has taken care of this issue.
- Allow the test to run in the context of a privileged account. CIs
running under privileged accounts would fail on permission failures, as
reported by Andres Freund. This issue is fixed by wrapping the postgres
command within pg_ctl as the latter will take care of any permissions
needed. The test checking a failure of postgres -C for a runtime
parameter with an instance running is removed, as pg_ctl produces an
unstable error code (no need for a CI to reproduce that).
Discussion: https://postgr.es/m/20210921032040.lyl4lcax37aedx2x@alap3.anarazel.de
The following things are improved:
- Fetch the system identifier from the source server before any
WAL streaming loop. This triggers extra checks to make sure that
pg_receivewal is still connected to a server with the same system ID
and a correct timeline.
- Move the calls to umask() (to set the file creation mode mask) and
RetrieveWalSegSize() (to fetch the size of WAL segments) a bit later, just
before the initial stream attempt. If the connection was done with a
database, pg_receivewal would fail, but those commands were still executed,
which was a waste. The slot creation and drop are now done before retrieving
the segment size.
Author: Bharath Rupireddy
Reviewed-by: Ronan Dunklau, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACX00YYeyBfoi55Cy=NrP-FcfMgiYYx1qRUEib3yjCVoaA@mail.gmail.com
A WAL segment closed during a WAL stream for pg_receivewal would
generate incorrect error messages depending on the context, as the file
name used when referring to a WAL segment ignored partial files or the
compression method used. In such cases, the error message generated
(failure on close, seek or rename) would not match a physical file
name. The same code paths are used by pg_basebackup, but it uses no
partial suffix so it is not impacted.
7fbe0c8 introduced in walmethods.c a callback to get the exact
physical file name used for a given context; this commit makes use of it
to improve those error messages. This could be extended to more code
paths of pg_basebackup/ in the future, if necessary.
Extracted from a larger patch by the same author.
Author: Georgios Kokolatos
Discussion: https://postgr.es/m/ZCm1J5vfyQ2E6dYvXz8si39HQ2gwxSZ3IpYaVgYa3lUwY88SLapx9EEnOf5uEwrddhx2twG7zYKjVeuP5MwZXCNPybtsGouDsAD1o2L_I5E=@pm.me
The output generated on Msys is incorrect because IPC::Run processes
output differently with native Perl (\r\n natively converted to \n) and
Msys perl (\r\n kept as-is), causing this test to fail.
For now, just disable the test to bring the buildfarm to a green state.
I think that the correct long-term solution would be to tweak all the
command_checks_* routines in PostgresNode.pm to handle this output like
psql does when using Msys, by discarding \r automatically before
comparing it.
Per report from jacana and fairywren. Thanks to Tom Lane for the ping.
Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
Until now, the -C option of postgres was handled before a small subset
of GUCs computed at runtime were initialized, leading to incorrect
results as the GUC machinery would fall back to default values for such
parameters.
For example, data_checksums could report "off" even for a cluster with
checksums enabled, as the control file is not loaded yet. Or
wal_segment_size would show a segment size of 16MB even if initdb
--wal-segsize used something else. Worse, the command would fail to
properly report the recently-introduced shared_memory_size, which
requires loading shared_preload_libraries as these libraries could ask
for a chunk of shared memory.
Support for runtime GUCs comes with a limitation, as the operation is
not allowed on a running server. One notable reason for this is that
_PG_init() functions of loadable libraries are called before all
runtime-computed GUCs are initialized, and this is not guaranteed to be
safe to do on a running server. For the case of shared_memory_size,
where we want to know how much memory would be used without allocating
it, this limitation is fine. Another case where this will help is for
huge pages, with the introduction of a different GUC to evaluate the
number of huge pages required for a server before starting it, without
having to allocate large chunks of memory.
This feature is controlled with a new GUC flag, and four parameters are
classified as runtime-computed as of this change:
- data_checksums
- shared_memory_size
- data_directory_mode
- wal_segment_size
Some TAP tests are added to provide some coverage here, using
data_checksums in the tests of pg_checksums.
Per discussion with Andres Freund, Justin Pryzby, Magnus Hagander and
more.
Author: Nathan Bossart
Discussion: https://postgr.es/m/F2772387-CE0F-46BF-B5F1-CC55516EB885@amazon.com
Also remove obsolete comments about why the 64-bit integers need to be
printed in a separate buffer. The reason used to be portability, but
now the remaining reason is that we need the string lengths for the
progress displays. That is evident by looking at the code right
below, so a new comment doesn't seem necessary.
These are added to the existing tests of pg_ctl for log rotation, which
already tested stderr. The same amount of coverage is added for csvlog:
- Checks for pg_current_logfile().
- Log rotation with expected file name.
- Log contents generated.
This test is refactored to minimize the amount of work required to add
tests for new log formats, easing some upcoming work.
Author: Michael Paquier, Sehrope Sarkuni
Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
This switches the default ACL to what the documentation has recommended
since CVE-2018-1058. Upgrades will carry forward any old ownership and
ACL. Sites that declined the 2018 recommendation should take a fresh
look. Recipes for commissioning a new database cluster from scratch may
need to create a schema, grant more privileges, etc. Out-of-tree test
suites may require such updates.
Reviewed by Peter Eisentraut.
Discussion: https://postgr.es/m/20201031163518.GB4039133@rfd.leadboat.com
When throttling is used, transactions that lag behind schedule by
more than the latency limit are counted and reported as skipped.
Previously, there was a case where pgbench counted all skipped
transactions even if the timer specified by the -T option had expired.
Counting them could take a very long time, especially when an
unrealistically high rate setting in the -R option caused quite a lot of
transactions to lag behind schedule. This could prevent pgbench from
ending immediately, so to users pgbench could look like it got stuck.
To fix the issue, this commit changes pgbench so that it stops counting
skipped transactions as soon as the timer is exceeded. The timer can
make pgbench end soon even when there are lots of skipped transactions
that have not been counted yet.
Note that there is no guarantee that all skipped transactions are
counted under -T, though there is under -t. This is OK in practice
because it's very unlikely to happen with realistic settings. Also, this
is not an issue newly introduced by this commit; there already were
cases where pgbench ended without counting all skipped transactions.
Back-patch to v14. Per discussion, we decided not to bother
back-patching to older stable branches because it's hard to imagine
the issue happening in practice (with realistic settings).
Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Greg Sabino Mullane, Fujii Masao
Discussion: https://postgr.es/m/20210613040151.265ff59d832f835bbcf8b3ba@sraoss.co.jp
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string. Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles. We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.
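A toy sketch of the caching idea (struct and field names are stand-ins,
not pg_dump's actual layout): format the name once, remember it in the
per-type struct, and hand back a pointer the caller never frees.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        unsigned int oid;
        char       *ftypname;   /* cached formatted name, owned here */
    } TypeInfoSketch;

    static const char *
    get_formatted_type_name(TypeInfoSketch *typ)
    {
        if (typ->ftypname == NULL)
        {
            char        buf[64];

            /* stand-in for the real catalog lookup */
            snprintf(buf, sizeof(buf), "type_%u", typ->oid);
            typ->ftypname = strdup(buf);
        }
        return typ->ftypname;   /* callers must not free this */
    }

    int
    main(void)
    {
        TypeInfoSketch t = {16385, NULL};

        puts(get_formatted_type_name(&t));
        puts(get_formatted_type_name(&t));  /* reuses the cached string */
        free(t.ftypname);
        return 0;
    }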
Back-patch, as commit 6c450a861 was. This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
Various psql backslash commands have both single-letter and long
forms, for example \e and \edit. Previously, tab completion
generally offered the single-letter form but not the long form.
It seems more sensible to offer the long form, because (a) no
useful completion can happen when you've already typed the single
letter, and (b) if you're not so familiar with the command set
as to know that, the long form is likely to be less confusing.
Haiying Tang, reviewed by Dagfinn Ilmari Mannsåker and myself
Discussion: https://postgr.es/m/OS0PR01MB61136018064660F095CB57A8FB129@OS0PR01MB6113.jpnprd01.prod.outlook.com
Previously pgwin32_is_service() would falsely return true when postgres is
started from somewhere within a service, but not as a service. That is
e.g. always the case with Windows docker containers, which some CI services
use to run Windows tests in.
When postgres falsely thinks it's running as a service, no messages are
written to stdout / stderr. That can be very confusing and causes a few tests
to fail.
To fix, additionally check in pgwin32_is_service() whether stderr is invalid.
For that to work in backend processes, pg_ctl is changed to pass down handles
so that postgres can do the same check (otherwise "default" handles are
created).
While this problem exists in all branches, there have been no reports by
users, the prospective CI usage currently is only for master, and I am not a
Windows expert. So doing the change only in master for now seems the sanest
approach.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Magnus Hagander <magnus@hagander.net>
Discussion: https://postgr.es/m/20210305185752.3up5eq2eanb7ofmb@alap3.anarazel.de
Replace fixed-length command buffers with psprintf() calls. We didn't
have anything as convenient as psprintf() when this code was written,
but now that we do, there's little reason for the limitation to
stand. Removing it eliminates some corner cases where (for example)
starting the postmaster with a whole lot of options fails.
Most individual file names that pg_ctl deals with are still restricted
to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename.
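The underlying idea, shown here as a plain-C sketch (psprintf itself
lives in PostgreSQL's common code; the command line below is invented for
the example): measure first, then allocate exactly what is needed instead
of using a fixed-length array.

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        const char *datadir = "/tmp/pgdata";
        const char *options = "-c shared_buffers=128MB -c work_mem=64MB";

        /* first call measures, second call formats: no fixed-size limit */
        int         need = snprintf(NULL, 0, "postgres -D \"%s\" %s",
                                    datadir, options);
        char       *cmd = malloc(need + 1);

        if (cmd == NULL)
            return 1;
        snprintf(cmd, need + 1, "postgres -D \"%s\" %s", datadir, options);
        puts(cmd);
        free(cmd);
        return 0;
    }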
Back-patch to all supported branches.
Phil Krylov
Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
While populating the pgbench_accounts table, plain COPY was
unconditionally used. By changing it to COPY FREEZE, the time for
VACUUM is significantly reduced, and thus the total time of "pgbench -i"
is also reduced. This only happens if pgbench runs against PostgreSQL
14 or later, because COPY FREEZE in previous versions of PostgreSQL does
not bring the benefit. Also, if partitioning is used, COPY FREEZE
cannot be used; in that case plain COPY is used as before.
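A sketch of the decision (not pgbench's actual code; the version cutoff
and command spelling are assumptions for illustration):

    #include <stdbool.h>
    #include <stdio.h>

    static const char *
    account_copy_command(int server_version_num, bool partitioned)
    {
        if (server_version_num >= 140000 && !partitioned)
            return "copy pgbench_accounts from stdin with (freeze on)";
        return "copy pgbench_accounts from stdin";
    }

    int
    main(void)
    {
        puts(account_copy_command(140000, false)); /* FREEZE is usable */
        puts(account_copy_command(130000, false)); /* plain COPY fallback */
        return 0;
    }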
Author: Tatsuo Ishii
Discussion: https://postgr.es/m/20210308.143907.2014279678657453983.t-ishii@gmail.com
Reviewed-by: Fabien COELHO, Laurenz Albe, Peter Geoghegan, Dean Rasheed
When the -C/--connect option is specified, pgbench establishes and closes
the connection for each transaction. In this case pgbench needs to
measure the times taken for all those connections and disconnections,
to include the average connection time in the benchmark result.
But previously pgbench could not measure those disconnection delays.
To fix the bug, this commit makes pgbench measure the disconnection
delays whenever a connection is closed at the end of a transaction,
if the -C/--connect option is specified.
Back-patch to v14. Per discussion, we concluded not to back-patch to v13
or before because changing that behavior in stable branches would
surprise users rather than providing benefits.
Author: Yugo Nagata
Reviewed-by: Fabien COELHO, Tatsuo Ishii, Asif Rehman, Fujii Masao
Discussion: https://postgr.es/m/20210614151155.a393bc7d8fed183e38c9f52a@sraoss.co.jp
The code printing expressions for extended statistics doubled the
parens, producing results like ((a+1)), which is unnecessary and not
consistent with how we print expressions elsewhere.
Fixed by tweaking the code to produce just a single set of parens.
Reported by Mark Dilger, fix by me. Backpatch to 14, where support for
extended statistics on expressions was added.
Reported-by: Mark Dilger
Discussion: https://postgr.es/m/20210122040101.GF27167%40telsasoft.com
For no particularly good reason, getPolicies() queried pg_policy
separately for each table. We can collect all the policies in
a single query instead, and attach them to the correct TableInfo
objects using findTableByOid() lookups. On the regression
database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local
server.
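A toy sketch of the batching pattern (names and data are made up, not
pg_dump's code): run one query for all policies, then attach each row to
its table via an OID lookup.

    #include <stdio.h>

    typedef struct
    {
        unsigned int oid;
        const char *relname;
    } TableSketch;

    typedef struct
    {
        unsigned int polrelid;
        const char *polname;
    } PolicyRowSketch;

    static TableSketch tables[] = {
        {1001, "accounts"},
        {1002, "branches"},
    };

    static TableSketch *
    find_table_by_oid(unsigned int oid)
    {
        for (int i = 0; i < 2; i++)
            if (tables[i].oid == oid)
                return &tables[i];
        return NULL;
    }

    int
    main(void)
    {
        /* result of the single batched query: one row per policy */
        PolicyRowSketch rows[] = {
            {1001, "accounts_policy"},
            {1002, "branches_policy"},
        };

        for (int i = 0; i < 2; i++)
        {
            TableSketch *tbl = find_table_by_oid(rows[i].polrelid);

            if (tbl != NULL)
                printf("attach policy %s to table %s\n",
                       rows[i].polname, tbl->relname);
        }
        return 0;
    }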
Per complaint from Hubert Depesz Lubaczewski. Since this is such
a simple fix and can have a visible performance benefit, back-patch
to all supported branches.
Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com