Adds --clone/--copy-file-range options to pg_combinebackup, to allow
copying files using file cloning or copy_file_range(). These methods may
be faster than the standard block-by-block copy, but the main advantage
is that they enable various features provided by CoW filesystems.
This commit only uses these copy methods for files that did not change
and can be copied as a whole from a single backup.
These new copy methods may not be available on all platforms, in which
case the command throws an error (immediately, even if no files would be
copied as a whole). This early failure seems better than failing later
when trying to copy the first file, after performing a lot of work on
earlier files.
If the requested copy method is available, but a checksum needs to be
recalculated (e.g. because of a different checksum type), the file is
still copied using the requested method, but it is also read for the
checksum calculation. Depending on the filesystem this may be more
expensive than just performing the simple copy, but it does enable the
CoW benefits.
Initial patch by Jakub Wartak, various reworks and improvements by me.
Author: Tomas Vondra, Jakub Wartak
Reviewed-by: Thomas Munro, Jakub Wartak, Robert Haas
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
Align blocks stored in incremental files to BLCKSZ, so that the
incremental backups work well with CoW filesystems.
The header of the incremental file is padded with \0 to a multiple of
BLCKSZ, so that the block data (also BLCKSZ) is aligned to BLCKSZ. The
padding is added only to files containing block data, so files with just
the header remain small. This adds a bit of extra space, but as the
number of blocks increases the overhead becomes negligible very quickly.
And as the padding is \0 bytes, it does compress extremely well.
The alignment is important for CoW filesystems that usually require the
blocks to be aligned to filesystem page size for features like block
sharing, deduplication, etc. to work well. With the variable-sized header
the blocks in the increments were not aligned at all, negating the
benefits of the CoW filesystems.
This matters even for non-CoW filesystems, for example when placed on a
RAID array. If the block is not aligned, it may easily span multiple
devices, causing read and write amplification.
It might be better to align the blocks to the filesystem page, not
BLCKSZ, but we have no good way to determine that. Even if we determine
the page size at the time of taking the backup, the backup may move. For
now the BLCKSZ seems sufficient - the filesystem page is usually 4K, so
the default BLCKSZ (8K) is aligned to that.
Author: Tomas Vondra
Reviewed-by: Robert Haas, Jakub Wartak
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
This changes the three callers of json_parse_manifest() to use
json_parse_manifest_incremental_chunk() if appropriate. In the case of
the backend caller, since we don't know the size of the manifest in
advance, we always call the incremental parser.
Author: Andrew Dunstan
Reviewed-By: Jacob Champion
Discussion: https://postgr.es/m/7b0a51d6-0d9d-7366-3a1a-f74397a02f55@dunslane.net
Previously, binaryheap didn't support updating a key or removing a
node in an efficient way. For example, in order to remove a node from
the binaryheap, the caller had to pass the node's position within the
binaryheap's internal array. Removing a node from the binaryheap is
done in O(log n), but searching for the key's position is done in
O(n).
This commit adds a hash table to binaryheap in order to track the
position of each node in the binaryheap. That way, by using newly
added functions such as binaryheap_update_up() etc., both updating a
key and removing a node can be done in O(1) on average and O(log n)
in the worst case. This is known as an indexed binary heap. The
caller can request an indexed binaryheap by passing indexed = true.
The current code does not use the new indexing logic, but it will be
used by an upcoming patch.
Reviewed-by: Vignesh C, Peter Smith, Hayato Kuroda, Ajin Cherian,
Tomas Vondra, Shubham Khanna
Discussion: https://postgr.es/m/CAD21AoDffo37RC-eUuyHJKVEr017V2YYDLyn1xF_00ofptWbkg%40mail.gmail.com
After installing the SIGINT handler in psql, SIGINT can no longer cancel
database reconnections. For instance, if the user starts a reconnection
and then needs to do some form of interaction (i.e., psql is polling),
there is no way to cancel the reconnection process currently.
Use PQconnectStartParams() in order to insert a cancel_pressed check
into the polling loop.
Tristan Partin, reviewed by Gurjeet Singh, Heikki Linnakangas, Jelte
Fennema-Nio, and me.
Discussion: http://postgr.es/m/D08WWCPVHKHN.3QELIKZJ2D9RZ@neon.tech
This patch allows pg_restore to wrap its commands into transaction
blocks, somewhat like --single-transaction, except that we commit
and start a new block after every N objects. Using this mode
with a size limit of 1000 or so objects greatly reduces the number
of transactions consumed by the restore, while preventing any
one transaction from taking enough locks to overrun the receiving
server's shared lock table.
(A value of 1000 works well with the default lock table size of
around 6400 locks. Higher --transaction-size values can be used
if one has increased the receiving server's lock table size.)
Excessive consumption of XIDs has been reported as a problem for
pg_upgrade in particular, but it could be bad for any restore; and the
change also reduces the number of fsyncs and amount of WAL generated,
so it should provide speed benefits too.
This patch does not try to make parallel workers batch the SQL
commands they issue. The trouble with doing that is that other
workers may need to see the objects a worker creates right away.
Possibly this can be improved later.
In this patch I have hard-wired pg_upgrade to use a transaction size
of 1000 divided by the number of parallel restore jobs allowed
(without that, we'd still be at risk of overrunning the shared lock
table). Perhaps there would be value in adding another pg_upgrade
option to allow user control of that, but I'm unsure that it's worth
the trouble; I think few users would use it, and any who did would
not see much benefit compared to the default.
Patch by me, but the original idea to batch SQL commands during
restore is due to Robins Tharakan.
Discussion: https://postgr.es/m/a9f9376f1c3343a6bb319dce294e20ac@EX13D05UWC001.ant.amazon.com
Commit c0d5be5d6 caused pg_dump to create a separate BLOB metadata TOC
entry for each large object (blob), but it did not touch the ancient
decision to put all the blobs' data into a single "BLOBS" TOC entry.
This is bad for a few reasons: for databases with millions of blobs,
the TOC becomes unreasonably large, causing performance issues;
selective restore of just some blobs is quite impossible; and we
cannot parallelize either dump or restore of the blob data, since our
architecture for that relies on farming out whole TOC entries to
worker processes.
To improve matters, let's group multiple blobs into each blob metadata
TOC entry, and then make corresponding per-group blob data TOC entries.
Selective restore using pg_restore's -l/-L switches is then possible,
though only at the group level. (Perhaps we should provide a switch
to allow forcing one-blob-per-group for users who need precise
selective restore and don't have huge numbers of blobs. This patch
doesn't do that, instead just hard-wiring the maximum number of blobs
per entry at 1000.)
The blobs in a group must all have the same owner, since the TOC entry
format only allows one owner to be named. In this implementation
we also require them to all share the same ACL (grants); the archive
format wouldn't require that, but pg_dump's representation of
DumpableObjects does. It seems unlikely that either restriction
will be problematic for databases with huge numbers of blobs.
The metadata TOC entries now have a "desc" string of "BLOB METADATA",
and their "defn" string is just a newline-separated list of blob OIDs.
The restore code has to generate creation commands, ALTER OWNER
commands, and drop commands (for --clean mode) from that. We would
need special-case code for ALTER OWNER and drop in any case, so the
alternative of keeping the "defn" as directly executable SQL code
for creation wouldn't buy much, and it seems like it'd bloat the
archive to little purpose.
Since we require the blobs of a metadata group to share the same ACL,
we can furthermore store only one copy of that ACL, and then make
pg_restore regenerate the appropriate commands for each blob. This
saves space in the dump file not only by removing duplicative SQL
command strings, but by not needing a separate TOC entry for each
blob's ACL. In turn, that reduces client-side memory requirements for
handling many blobs.
ACL TOC entries that need this special processing are labeled as
"ACL"/"LARGE OBJECTS nnn..nnn". If we have a blob with a unique ACL,
continue to label it as "ACL"/"LARGE OBJECT nnn". We don't actually
have to make such a distinction, but it saves a few cycles during
restore for the easy case, and it seems like a good idea to not change
the TOC contents unnecessarily.
The data TOC entries ("BLOBS") are exactly the same as before,
except that now there can be more than one, so we'd better give them
identifying tag strings.
Also, commit c0d5be5d6 put the new BLOB metadata TOC entries into
SECTION_PRE_DATA, which perhaps is defensible in some ways, but
it's a rather odd choice considering that we go out of our way to
treat blobs as data. Moreover, because parallel restore handles
the PRE_DATA section serially, this means we'd only get part of the
parallelism speedup we could hope for. Move these entries into
SECTION_DATA, letting us parallelize the lo_create calls, not just the
data loading when there are many blobs. Add dependencies to ensure
that we won't try to load data for a blob we've not yet created.
As this stands, we still generate a separate TOC entry for any comment
or security label attached to a blob. I feel comfortable in believing
that comments and security labels on blobs are rare, so this patch
should be enough to get most of the useful TOC compression for blobs.
We have to bump the archive file format version number, since existing
versions of pg_restore wouldn't know they need to do something special
for BLOB METADATA, plus they aren't going to work correctly with
multiple BLOBS entries or multiple-large-object ACL entries.
The directory and tar-file format handlers need some work
for multiple BLOBS entries: they used to hard-wire the file name
as "blobs.toc", which is replaced here with "blobs_<dumpid>.toc".
The 002_pg_dump.pl test script also knows about that and requires
minor updates. (I had to drop the test for manually-compressed
blobs.toc files with LZ4, because lz4's obtuse command line
design requires explicit specification of the output file name
which seems impractical here. I don't think we're losing any
useful test coverage thereby; that test stanza seems completely
duplicative with the gzip and zstd cases anyway.)
In passing, centralize management of the lo_buf used to hold data
while restoring blobs. The code previously had each format handler
create lo_buf, which seems rather pointless given that the format
handlers all make it the same way. Moreover, the format handlers
never use lo_buf directly, making this setup a failure from a
separation-of-concerns standpoint. Let's move the responsibility into
pg_backup_archiver.c, which is the only module concerned with lo_buf.
The reason to do this in this patch is that it allows a centralized
fix for the now-false assumption that we never restore blobs in
parallel. Also, get rid of dead code in DropLOIfExists: it's been a
long time since we had any need to be able to restore to a pre-9.0
server.
Discussion: https://postgr.es/m/a9f9376f1c3343a6bb319dce294e20ac@EX13D05UWC001.ant.amazon.com
Various error paths did not release resources before returning.
While it's likely that the program would just exit shortly later,
none of the functions in question have summary exit(1) calls,
so they should not be assuming that.
Ranier Vilela and Tom Lane, per reports from Coverity
Discussion: https://postgr.es/m/CAEudQAr2_SZFxB4kXJiL4+2UaNZxUk5UBJtj0oXyJYMGZu-03g@mail.gmail.com
This commit adds a new COPY option LOG_VERBOSITY, which controls the
number of messages emitted during processing. Valid values are
'default' and 'verbose'.
This is currently used in COPY FROM when the ON_ERROR option is set
to 'ignore'. If 'verbose' is specified, a NOTICE message is emitted for
each discarded row, providing additional information such as line
number, column name, and the malformed value. This helps users to
identify problematic rows that failed to load.
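For example, a minimal sketch (the table and file names are invented
for illustration):

    COPY t1 (a, b) FROM '/tmp/data.csv'
      WITH (FORMAT csv, ON_ERROR 'ignore', LOG_VERBOSITY 'verbose');

Each malformed row then produces its own NOTICE instead of being
silently skipped and only counted in the final summary.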
Author: Bharath Rupireddy
Reviewed-by: Michael Paquier, Atsushi Torikoshi, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/CALj2ACUk700cYhx1ATRQyRw-fBM%2BaRo6auRAitKGff7XNmYfqQ%40mail.gmail.com
This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE
actions, which operate on rows that exist in the target relation, but
not in the data source. These actions can execute UPDATE, DELETE, or
DO NOTHING sub-commands.
This is in contrast to already-supported WHEN NOT MATCHED actions,
which operate on rows that exist in the data source, but not in the
target relation. To make this distinction clearer, such actions may
now be written as WHEN NOT MATCHED BY TARGET.
Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is
equivalent to writing WHEN NOT MATCHED BY TARGET.
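For example, a sketch with invented table names that makes the target
mirror the source:

    MERGE INTO target t
      USING source s ON t.id = s.id
      WHEN MATCHED THEN
        UPDATE SET value = s.value
      WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, value) VALUES (s.id, s.value)
      WHEN NOT MATCHED BY SOURCE THEN
        DELETE;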
Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.
Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com
After the upgrade, the failing test checked that changes made on the
publisher were replicated to the subscriber, but it missed waiting for
one of the subscriptions to catch up.
Per buildfarm
Author: Vignesh C
Reviewed-by: Kuroda Hayato
Discussion: https://postgr.es/m/CALDaNm0z=fLtio1h50K8WossUGXU+gy0H9y9=RYh1DDZiq2EDw@mail.gmail.com
Many other utilities use -d to specify the database to use, but
pgbench uses it to enable debug mode. This is causing some users
to accidentally enable it. This commit changes -d to accept the
database name and introduces --dbname. Debug mode can still be
enabled with --debug. This is a backward-incompatible change, but
it has been judged to be worth the trade-off, i.e., some scripts
that use pgbench will need to be updated.
Author: Greg Sabino Mullane
Reviewed-by: Tomas Vondra, Euler Taveira, Alvaro Herrera, David Christensen
Discussion: https://postgr.es/m/CAKAnmmLjAzwVtb%3DVEaeuCtnmOLpzkJ1uJ_XiQ362YdD9B72HSg%40mail.gmail.com
It's now possible to specify a table access method via
CREATE TABLE ... USING for a partitioned table, as well as change it with
ALTER TABLE ... SET ACCESS METHOD. Specifying an AM for a partitioned
table lets the value be used for all future partitions created under it,
closely mirroring the behavior of the TABLESPACE option for partitioned
tables. Existing partitions are not modified.
For a partitioned table with no AM specified, any new partitions are
created with the default_table_access_method.
Also add ALTER TABLE ... SET ACCESS METHOD DEFAULT, which reverts to the
original state of using the default for new partitions.
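For example, a sketch using invented relation names and the builtin
"heap" AM:

    CREATE TABLE measurements (id int, val numeric)
      PARTITION BY RANGE (id) USING heap;
    -- new partitions now default to heap
    CREATE TABLE measurements_1 PARTITION OF measurements
      FOR VALUES FROM (1) TO (1000);
    -- revert to using default_table_access_method for new partitions
    ALTER TABLE measurements SET ACCESS METHOD DEFAULT;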
The relcache of partitioned tables is not changed: rd_tableam is not
set, even if a partitioned table has a relam set.
Author: Justin Pryzby <pryzby@telsasoft.com>
Author: Soumyadeep Chakraborty <soumyadeep2007@gmail.com>
Author: Michaël Paquier <michael@paquier.xyz>
Reviewed-by: The authors themselves
Discussion: https://postgr.es/m/CAE-ML+9zM4wJCGCBGv01k96qQ3gFv4WFcFy=zqPHKeaEFwwv6A@mail.gmail.com
Discussion: https://postgr.es/m/20210308010707.GA29832%40telsasoft.com
pg_createsubscriber must be run on the target server, and it must be
able to connect to both the source server (publisher) and the target
server (subscriber). All tables in the specified database(s) are
included in the logical replication setup. A pair of publication and
subscription objects is created for each database.
The main advantage of pg_createsubscriber over a conventional logical
replication setup is that it avoids the initial data copy; it also
shortens the catchup phase.
Some prerequisites must be met to run it successfully; they are
basically the requirements of logical replication. The tool starts by
creating a publication (using FOR ALL TABLES) and a replication slot
for each specified database. It then writes recovery parameters into
the target data directory, starts the target server with the LSN of
the last replication slot (the replication start point) as the point
up to which recovery will proceed, and waits until the target server
is promoted. Next, it creates one subscription per specified database
(using the publication and replication slot created in the previous
step) on the target server, sets the replication progress of each
subscription to the replication start point, and enables each
subscription. Finally, it changes the system identifier on the
target server.
Author: Euler Taveira <euler.taveira@enterprisedb.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Shubham Khanna <khannashubham1197@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/5ac50071-f2ed-4ace-a8fd-b892cffd33eb@www.fastmail.com
Straightforward index-level REINDEX is not supported with multiple jobs,
as we cannot control the concurrent processing of multiple indexes that
depend on the same relation. Instead, we dedicate the whole table to a
particular reindex job. Thus, if the indexes in the list belong to
different tables, that gives us a fair level of parallelism.
This commit teaches get_parallel_object_list() to fetch table names for
indexes in the case of index-level REINDEX. The same tables are grouped
together in the output order, and the list of indexes is also rebuilt to
match that order. Later, during processing of that list, we push indexes
belonging to the same table into the same job.
Discussion: https://postgr.es/m/CACG%3DezZU_VwDi-1PN8RUSE6mcYG%2BYx1NH_rJO4%2BKe-mKqLp%3DNw%40mail.gmail.com
Author: Maxim Orlov, Svetlana Derevyanko, Alexander Korotkov
Reviewed-by: Michael Paquier
It wasn't getting freed in one code path, which Coverity identified as
a resource leak. It's probably of little consequence, but re-ordering
the code into the correct sequence is no more work than dismissing the
complaint. Minor oversight in commit a145f424d.
While here, improve the unreasonably clunky coding of
FindDbnameInConnParams: use of an output parameter is unnecessary
and prone to uninitialized-variable problems.
Coverity identified this as a resource leak. It's surely of no
consequence given that the function is called only once per run, but
freeing the storage is no more work than dismissing the complaint.
Minor oversight in commit 347758b12.
Until now, the reason for replication slot invalidation was not tracked
directly in pg_replication_slots. A recent commit 007693f2a3 added
'conflict_reason' to show the reasons for slot conflict/invalidation, but
only for logical slots.
This commit adds a new column 'invalidation_reason' to show invalidation
reasons for both physical and logical slots. It also turns the
'conflict_reason' text column into a 'conflicting' boolean column
(effectively reverting commit 007693f2a3). The 'conflicting' column is
true for
invalidation reasons 'rows_removed' and 'wal_level_insufficient' because
those make the slot conflict with recovery. When 'conflicting' is true,
one can now look at the new 'invalidation_reason' column for the reason
for the logical slot's conflict with recovery.
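For example, both columns can be inspected with:

    SELECT slot_name, slot_type, conflicting, invalidation_reason
      FROM pg_replication_slots;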
The new 'invalidation_reason' column will also be useful to track other
invalidation reasons in future commits.
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik
Discussion: https://www.postgresql.org/message-id/ZfR7HuzFEswakt/a%40ip-10-97-1-34.eu-west-3.compute.internal
Discussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
Commit cca97ce6a665 allowed a dbname in the pg_basebackup connection
string, and this commit allows it to be written to postgresql.auto.conf
when the -R option is used. The database name in the connection string
will be used by the
logical replication slot synchronization on standby.
The dbname will be recorded only if specified explicitly in the connection
string or environment variable.
Masahiko Sawada hasn't reviewed the code in detail but endorsed the idea.
Author: Vignesh C, Kuroda Hayato
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAB8KJ=hdKdg+UeXhReeHpHA6N6v3e0qFF+ZsPFHk9_ThWKf=2A@mail.gmail.com
Thanks to commits ea18eb7d62, b6ee30ec08, and 19a829a327, the
002_blocks.pl test now consistently passes, so we can remove this
temporary debugging code.
This reverts commit 5ddf9973477729cf161b4ad0a1efd52f4fea9c88.
Discussion: https://postgr.es/m/20240314210010.GA3056455%40nathanxps13
The builtin C.UTF-8 locale has similar semantics to the libc locale of
the same name. That is, code point sort order (fast, memcmp-based)
combined with Unicode semantics for character operations such as
pattern matching, regular expressions, and
LOWER()/INITCAP()/UPPER(). The character semantics are based on
Unicode simple case mappings.
The builtin provider's C.UTF-8 offers several important advantages
over libc:
* faster sorting -- benefits from additional optimizations such as
abbreviated keys and varstrfastcmp_c
* faster case conversion, e.g. LOWER(), at least compared with some
libc implementations
* available on all platforms with identical semantics, and the
semantics are stable, testable, and documentable within a given
Postgres major version
Being based on memcmp, the builtin C.UTF-8 locale does not offer
natural language sort order. But it is an improvement for most use
cases that might otherwise use libc's "C.UTF-8" locale, as well as
many use cases that use libc's "C" locale.
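For example, a database using the builtin C.UTF-8 locale can be
created like this (the database name is invented; the builtin locale
requires UTF-8 encoding):

    CREATE DATABASE mydb
      LOCALE_PROVIDER builtin
      BUILTIN_LOCALE 'C.UTF-8'
      ENCODING 'UTF8'
      TEMPLATE template0;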
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
The checks for data type usage were each connecting to all databases
in the cluster and running their query. On clusters which have a lot
of databases this can become unnecessarily expensive. This moves the
checks to run in a single connection instead to minimize setup and
teardown overhead.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/BB4C76F-D416-4F9F-949E-DBE950D37787@yesql.se
This checks that certain I/O-related Perl functions properly check
their return value. Some parts of the PostgreSQL code had been a bit
sloppy about that. The new perlcritic warnings are fixed here. I
didn't design any beautiful error messages, mostly just used "or die
$!", which mostly matches existing code, and also this is
developer-level code, so having the system error plus source code
reference should be ok.
Initially, we only activate this check for a subset of what the
perlcritic check would warn about. The effective list is
chmod flock open read rename seek symlink system
The initial set of functions is picked because most existing code
already checked the return value of those, so any omissions are
probably unintended, or because it seems important for test
correctness.
The actual perlcritic configuration is written as an exclude list.
That seems better so that we are clear on what we are currently not
checking. Maybe future patches want to investigate checking some of
the other functions. (In principle, we might eventually want to check
all of them, but since this is test and build support code, not
production code, there are probably some reasonable compromises to be
made.)
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/88b7d4f2-46d9-4cc7-b1f7-613c90f9a76a%40eisentraut.org
Based on comments from Peter Eisentraut.
* Document CREATE DATABASE ... BUILTIN_LOCALE.
* Determine required encoding based on locale name for CREATE
COLLATION. Use -1 for "C" (requires catversion bump).
* initdb output fixups.
* Make ctype_is_c a constant true for now.
* Fixups to ICU 010_create_database.pl test.
Discussion: https://postgr.es/m/4135cf11-206d-40ed-96c0-9363c1232379@eisentraut.org
Allocate memory for the error message inside memory owned by the
JsonLexContext and move responsibility away from the caller for
freeing it. This means that we can partially revert b44669b2ca
as this is now safe to use in FRONTEND code. The motivation for
this comes from the OAuth and incremental JSON patchsets but it
also adds value on its own.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAOYmi+mWdTd6ujtyF7MsvXvk7ToLRVG_tYAcaGbQLvf=N4KrQw@mail.gmail.com
This allows a RETURNING clause to be appended to a MERGE query, to
return values based on each row inserted, updated, or deleted. As with
plain INSERT, UPDATE, and DELETE commands, the returned values are
based on the new contents of the target table for INSERT and UPDATE
actions, and on its old contents for DELETE actions. Values from the
source relation may also be returned.
As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be
used as the source relation for other operations such as WITH queries
and COPY commands.
Additionally, a special function merge_action() is provided, which
returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action
executed for each row. The merge_action() function can be used
anywhere in the RETURNING list, including in arbitrary expressions and
subqueries, but it is an error to use it anywhere outside of a MERGE
query's RETURNING list.
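For example (table names invented for illustration):

    MERGE INTO target t
      USING source s ON t.id = s.id
      WHEN MATCHED THEN
        UPDATE SET value = s.value
      WHEN NOT MATCHED THEN
        INSERT (id, value) VALUES (s.id, s.value)
    RETURNING merge_action(), t.id, t.value;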
Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera,
Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut,
and Wolfgang Walther.
Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
psql has the :{?name} syntax for testing whether a psql variable exists.
This commit implements tab completion for this syntax. Notably, in
order to implement this we have to remove '{' from WORD_BREAKS. It
appears that '{' has been there from the very beginning, coming from
the default value of rl_basic_word_break_characters. And :{?name} is
the only psql syntax using the '{' character, so removing it from
WORD_BREAKS shouldn't break anything.
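For reference, the syntax being completed looks like this (the
variable name is invented):

    \if :{?myvar}
      \echo 'myvar is defined'
    \else
      \echo 'myvar is not defined'
    \endif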
Discussion: https://postgr.es/m/CAGRrpzZU48F2oV3d8eDLr%3D4TU9xFH5Jt9ED%2BqU1%2BX91gMH68Sw%40mail.gmail.com
Author: Steve Chavez
Reviewed-by: Erik Wienhold
New provider for collations, like "libc" or "icu", but without any
external dependency.
Initially, the only locale supported by the builtin provider is "C",
which is identical to the libc provider's "C" locale. The libc
provider's "C" locale has always been treated as a special case that
uses an internal implementation, without using libc at all -- so the
new builtin provider uses the same implementation.
The builtin provider's locale is independent of the server environment
variables LC_COLLATE and LC_CTYPE. Using the builtin provider, the
database collation locale can be "C" while LC_COLLATE and LC_CTYPE are
set to "en_US", which is impossible with the libc provider.
Offering a new builtin provider makes it clear that the semantics of
a collation using this provider will never depend on libc, and makes
it easier to document the behavior.
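For example, a collation using the new provider can be created with
(the collation name is invented):

    CREATE COLLATION c_builtin (provider = builtin, locale = 'C');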
Discussion: https://postgr.es/m/ab925f69-5f9d-f85e-b87c-bd2a44798659@joeconway.com
Discussion: https://postgr.es/m/dd9261f4-7a98-4565-93ec-336c1c110d90@manitou-mail.org
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX,
REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation.
Roles with privileges of pg_maintain may run those same commands on
all relations.
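For example (role and table names invented):

    GRANT MAINTAIN ON measurements TO maint_user;
    -- or, for maintenance of all relations:
    GRANT pg_maintain TO maint_user;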
This was previously committed for v16, but it was reverted in
commit 151c22deee due to concerns about search_path tricks that
could be used to escalate privileges to the table owner. Commits
2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by
restricting search_path when running maintenance commands.
Bumps catversion.
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13
Before this patch, if you took a full backup on server A and then
tried to use the backup manifest to take an incremental backup on
server B, it wouldn't know that the manifest was from a different
server and so the incremental backup operation could potentially
complete without error. When you later tried to run pg_combinebackup,
you'd find out that your incremental backup was and always had been
invalid. That's poor timing, because nobody likes finding out about
backup problems only at restore time.
With this patch, you'll get an error when trying to take the (invalid)
incremental backup, which seems a lot nicer.
Amul Sul, revised by me. Review by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoYLZzbSAMM3cAjV4Y+iCRZn-bR9H2+Mdz7NdaJFU1Zb5w@mail.gmail.com
This works just like get_controlfile(), but expects the path to the
control file rather than the path to the data directory that contains
the control file. This makes more sense in cases where the caller
has already constructed the path to the control file itself.
Amul Sul and Robert Haas, reviewed by Michael Paquier
Commit f696c0cd5f tried to account for the version in a way that
includes development versions, but it was broken. Fix with suggestion
from Tom Lane.
Discussion: https://postgr.es/m/1553991.1710191312@sss.pgh.pa.us
Reported-by: Tom Lane
Most callers of strerror() are removed from the backend code. The
remaining callers require special handling with a saved errno from a
previous system call. The frontend code still needs strerror() where
error states need to be handled outside of fprintf.
Note that pg_regress is not changed to use %m as the TAP output may
clobber errno, since those functions call fprintf() and friends before
evaluating the format string.
Support for %m in src/port/snprintf.c has been added in d6c55de1f99a,
hence all the stable branches currently supported include it.
Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/87sf13jhuw.fsf@wibble.ilmari.org
Commit f696c0cd5f caused a test failure in 002_pg_upgrade.pl, because
an earlier s/// operator caused qr// to no longer match the empty
string. Use qr/^$/ instead, which is a better test anyway, because we
expect the stderr to be empty.
Initially this appeared to be a perl bug, but per discussion, it seems
that it was a misunderstanding of how perl works: an empty pattern
uses the last successful pattern. Given how surprising that behavior
is to perl non-experts, we will need to look for similar problems
elsewhere and eliminate the use of empty patterns throughout the
code. For now, address this one instance to fix the buildfarm.
Discussion: https://postgr.es/m/0ef325fa06e7a1605c4e119c4ecb637c67e5fb4e.camel@j-davis.com
Reviewed-by: Tom Lane
Presently, reindexdb's --table, --schema, --index, and --system
options cannot be used together with --all, i.e., you cannot
specify objects to process in all databases. This commit removes
this unnecessary restriction. Furthermore, it removes the
restriction that --system cannot be used with --table, --schema,
and --index. There is no such restriction for the latter options,
and there is no technical reason to disallow these combinations.
Reviewed-by: Kyotaro Horiguchi, Dean Rasheed
Discussion: https://postgr.es/m/20230628232402.GA1954626%40nathanxps13