Commit Graph

6734 Commits

Author SHA1 Message Date
3f7836ff65 Fix calculation of which GENERATED columns need to be updated.
We were identifying the updatable generated columns of inheritance
children by transposing the calculation made for their parent.
However, there's nothing that says a traditional-inheritance child
can't have generated columns that aren't there in its parent, or that
have different dependencies than are in the parent's expression.
(At present it seems that we don't enforce that for partitioning
either, which is likely wrong to some degree or other; but the case
clearly needs to be handled with traditional inheritance.)
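
A minimal sketch of the shape at issue (table and column names are
illustrative only): a traditional-inheritance child may declare its own
generated column that depends on an inherited column.

    CREATE TABLE parent (f1 int, f2 int GENERATED ALWAYS AS (f1 + 1) STORED);
    CREATE TABLE child (f3 int GENERATED ALWAYS AS (f1 * 10) STORED)
      INHERITS (parent);
    -- An UPDATE routed through the parent must notice that child.f3
    -- depends on f1, even though the parent has no such dependency.
    UPDATE parent SET f1 = 42;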

Hence, drop the very-klugy-anyway "extraUpdatedCols" RTE field
in favor of identifying which generated columns depend on updated
columns during executor startup.  In HEAD we can remove
extraUpdatedCols altogether; in back branches, it's still there but
always empty.  Another difference between the HEAD and back-branch
versions of this patch is that in HEAD we can add the new bitmap field
to ResultRelInfo, but that would cause an ABI break in back branches.
Like 4b3e37993, add a List field at the end of struct EState instead.

Back-patch to v13.  The bogus calculation is also being made in v12,
but it doesn't have the same visible effect because we don't use it
to decide which generated columns to recalculate; as a consequence of
which the patch doesn't apply easily.  I think that there might still
be a demonstrable bug associated with trigger firing conditions, but
that's such a weird corner-case usage that I'm content to leave it
unfixed in v12.

Amit Langote and Tom Lane

Discussion: https://postgr.es/m/CA+HiwqFshLKNvQUd1DgwJ-7tsTp=dwv7KZqXC4j2wYBV1aCDUA@mail.gmail.com
Discussion: https://postgr.es/m/2793383.1672944799@sss.pgh.pa.us
2023-01-05 14:12:17 -05:00
529da086ba Remove extra regress check arguments from test_pg_db_role_setting
They were accidentally copied from test_oat_hooks.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20230102154240.GL1153%40telsasoft.com
Reviewed-by: Pavel Borisov
2023-01-05 13:11:40 +03:00
afdd9f7f0e meson: Add 'running' test setup, as a replacement for installcheck
Do the same as 3f0e786ccbf5 for test_pg_db_role_setting.

Discussion: https://postgr.es/m/20221227065456.GU1153@telsasoft.com
Author: Pavel Borisov
Reviewed-by: Justin Pryzby, Tom Lane
2023-01-05 13:11:28 +03:00
bf03cfd162 Windows support in pg_import_system_collations
Windows can enumerate the locales that are either installed or
supported by calling EnumSystemLocalesEx(), similar to what is already
done in the READ_LOCALE_A_OUTPUT switch.  We can refactor some of the
logic already used in that switch into a new function
create_collation_from_locale().

The enumerated locales have BCP 47 shape, that is with a hyphen
between language and territory, instead of POSIX's underscore.  The
created collations will retain the BCP 47 shape, but we will also
create a POSIX alias, so xx-YY will have an xx_YY alias.
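
For illustration (the collation names are hypothetical instances of the
pattern described above), after importing the system collations on
Windows both spellings should be visible:

    SELECT collname FROM pg_collation WHERE collname IN ('en-US', 'en_US');
    -- the BCP 47 name plus its POSIX-style alias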

A new test collate.windows.win1252 is added that is like
collate.linux.utf8.

Author: Juan Jose Santamaria Flecha <juanjo.santamaria@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/0050ec23-34d9-2765-9015-98c04f0e18ac@postgrespro.ru
2023-01-03 14:21:56 +01:00
33ab0a2a52 Fix typos in comments, code and documentation
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes.  One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
2023-01-03 16:26:14 +09:00
c8e1ba736b Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:37 -05:00
2ceea5adb0 Accept "+infinity" in date and timestamp[tz] input.
The float and numeric types accept this variant spelling of
"infinity", so it seems like the datetime types should too.

Vik Fearing, some cosmetic mods by me

Discussion: https://postgr.es/m/d0bef637-2dbd-0a5d-e539-48243b6f6c5e@postgresfriends.org
2023-01-01 14:16:07 -05:00
d747dc85ae In plpgsql, don't preassign portal names to bound cursor variables.
A refcursor variable that is bound to a specific query (by declaring
it with "CURSOR FOR") now chooses a portal name in the same way as an
unbound, plain refcursor variable.  Its string value starts out as
NULL, and unless that's overridden by manual assignment, it will be
replaced by a unique-within-session portal name during OPEN.

The previous behavior was to initialize such variables to contain
their own name, resulting in that also being the portal name unless
the user overwrote it before OPEN.  The trouble with this is that
it causes failures due to conflicting portal names if the same
cursor variable name is used in different functions.  It is pretty
non-orthogonal to have bound and unbound refcursor variables behave
differently on this point, too, so let's change it.

This change can cause compatibility problems for applications that
open a bound cursor in a plpgsql function and then use it in the
calling code without explicitly passing back the refcursor value
(portal name).  If the calling code simply assumes that the portal
name matches the called function's variable name, it will now fail.
That can be fixed by explicitly assigning a string value to the
refcursor variable before OPEN, e.g.

    DECLARE myc CURSOR FOR SELECT ...;
    BEGIN
      myc := 'myc';  -- add this
      OPEN myc;

We have no documentation examples showing the troublesome usage
pattern, so we can hope it's rare in practice.

Patch by me; thanks to Pavel Stehule and Jan Wieck for review.

Discussion: https://postgr.es/m/1465101.1667345983@sss.pgh.pa.us
2023-01-01 13:22:34 -05:00
14d63dd252 ci: Change macOS builds from Intel to ARM.
Cirrus is about to shut down its macOS-on-Intel support, so it's time to
move our CI testing over to ARM instances.  The Homebrew package manager
changed its default installation prefix for the new architecture, so a
couple of tests need tweaks to find binaries.

Back-patch to 15, where in-tree CI began.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20221122225744.GF11463%40telsasoft.com
2023-01-01 10:45:18 +13:00
7aa81c61ec Fix precision handling for some COERCE_SQL_SYNTAX functions
f193883 has been incorrectly setting up the precision used in the
timestamp computations returned by the following functions:
- LOCALTIME
- LOCALTIMESTAMP
- CURRENT_TIME
- CURRENT_TIMESTAMP

Specifying an out-of-range precision for CURRENT_TIMESTAMP and
LOCALTIMESTAMP was raising a WARNING without adjusting the precision,
leading to a subsequent error.  LOCALTIME and CURRENT_TIME raised a
WARNING without an error, but the precision passed to the internal
routines was still incorrect, so let's clean that up.

Ian has reported the problems in timestamp.c, while I have noticed the
ones in date.c.  Regression tests are added for all of them with
precisions high enough to provide coverage for the warnings, something
that went missing up to this commit.
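
As a usage sketch (assuming the usual clamp-to-6 behavior), an
out-of-range precision now merely warns and gets adjusted:

    SELECT CURRENT_TIMESTAMP(7);
    -- raises a WARNING about reducing the precision to 6,
    -- but no longer fails afterwards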

Author: Ian Lawrence Barwick, Michael Paquier
Discussion: https://postgr.es/m/CAB8KJ=jQEnn9sYG+N752spt68wMrhmT-ocHCh4oeNmHF82QMWA@mail.gmail.com
2022-12-30 20:47:57 +09:00
a5434c5258 Remove new locale dependency in regproc regression test.
The modified error message for regcollationin failure includes
the database encoding, which it should've occurred to me is a
portability hazard for the regression tests.  Adjust the test
so the expected output doesn't include that.

In passing, fix a comment typo introduced in b8c0ffbd2.

Per buildfarm.
2022-12-27 13:06:42 -05:00
858e776c84 Convert the reg* input functions to report (most) errors softly.
This is not really complete, but it catches most cases of practical
interest.  The main omissions are:

* regtype, regprocedure, and regoperator parse type names by
calling the main grammar, so any grammar-detected syntax error
will still be a hard error.  Also, if one includes a type
modifier in such a type specification, errors detected by the
typmodin function will be hard errors.

* Lookup errors are handled just by passing missing_ok = true
to the relevant catalog lookup function.  Because we've used
quite a restrictive definition of "missing_ok", this means that
edge cases such as "the named schema exists, but you lack
USAGE permission on it" are still hard errors.

It would make sense to me to replace most/all missing_ok
parameters with an escontext parameter and then allow these
additional lookup failure cases to be trapped too.  But that's
a job for some other day.
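
A quick sketch of the effect, using the recently added validity-check
function (the relation name is just an example):

    SELECT pg_input_is_valid('no_such_table', 'regclass');
    -- returns false instead of raising "relation does not exist"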

Discussion: https://postgr.es/m/3342239.1671988406@sss.pgh.pa.us
2022-12-27 12:26:01 -05:00
78212f2101 Convert tsqueryin and tsvectorin to report errors softly.
This is slightly tedious because the adjustments cascade through
a couple of levels of subroutines, but it's not very hard.
I chose to avoid changing function signatures more than absolutely
necessary, by passing the escontext pointer in existing structs
where possible.

tsquery's nuisance NOTICEs about empty queries are suppressed in
soft-error mode, since they're not errors and we surely don't want
them to be shown to the user anyway.  Maybe that whole behavior
should be reconsidered.

Discussion: https://postgr.es/m/3824377.1672076822@sss.pgh.pa.us
2022-12-27 12:00:31 -05:00
eb8312a22a Detect bad input for types xid, xid8, and cid.
Historically these input functions just called strtoul or strtoull
and returned the result, with no error detection whatever.  Upgrade
them to reject garbage input and out-of-range values, similarly to
our other numeric input routines.

To share the code for this with type oid, adjust the existing
"oidin_subr" to be agnostic about the SQL name of the type it is
handling, and move it to numutils.c; then clone it for 64-bit types.

Because the xid types previously accepted hex and octal input by
reason of calling strtoul[l] with third argument zero, I made the
common subroutine do that too, with the consequence that type oid
now also accepts hex and octal input.  In view of 6fcda9aba, that
seems like a good thing.

While at it, simplify the existing over-complicated handling of
syntax errors from strtoul: we only need one ereturn not three.
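
A short sketch of the resulting behavior (values are illustrative):

    SELECT 'garbage'::xid;   -- now rejected as invalid input
    SELECT '0x2A'::oid;      -- hex input now accepted, yields 42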

Discussion: https://postgr.es/m/3526121.1672000729@sss.pgh.pa.us
2022-12-27 11:40:01 -05:00
5de94a041e Add 'logical_decoding_mode' GUC.
This enables streaming or serializing changes immediately in logical
decoding. This parameter is intended to be used to test logical decoding
and replication of large transactions for which otherwise we need to
generate the changes till logical_decoding_work_mem is reached.

This helps in reducing the runtime of existing tests related to logical
replication of in-progress transactions and will help in writing tests
for the upcoming feature for applying large in-progress transactions in
parallel.
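
For example (assuming 'immediate' as the non-default value, with
'buffered' being the default), a test can force each change to be
serialized or streamed right away:

    SET logical_decoding_mode = 'immediate';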

Author: Shi yu
Reviewed-by: Sawada Masahiko, Shveta Mallik, Amit Kapila, Dilip Kumar, Kuroda Hayato, Kyotaro Horiguchi
Discussion: https://postgr.es/m/OSZPR01MB63104E7449DBE41932DB19F1FD1B9@OSZPR01MB6310.jpnprd01.prod.outlook.com
2022-12-26 08:58:16 +05:30
442e25d248 Convert enum_in() to report errors softly.
I missed this in my initial survey, probably because I examined
the contents of pg_type in the postgres database, which lacks
any enumerated types.

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-25 14:32:30 -05:00
e37fe1db6e Convert jsonpath's input function to report errors softly
Reviewed by Tom Lane

Discussion: https://postgr.es/m/a8dc5700-c341-3ba8-0507-cc09881e6200@dunslane.net
2022-12-24 15:21:20 -05:00
b5aff92557 Fix recent accidental omission in pg_proc.dat
ed1a88dda added support functions for the ntile(), percent_rank() and
cume_dist() window functions but neglected to actually add these support
functions to the pg_proc entry for the corresponding window function.

Also, take this opportunity to add these window functions to one of the
regression tests added in ed1a88dda to give the support functions a little
bit of exercise.  If I'd done that in the first place then the omission
would have been more obvious.

Bump the catversion, again.
2022-12-24 13:18:35 +13:00
6602599ce2 Fix end LSN determination in recently added test
The test added in commit e44dae07f931 has a thinko: it wants to read
info about a few WAL records, but it obtains the LSN of the final record
to read by asking for the WAL insert position; however,
pg_get_wal_records_info only allows reading up to the flush position
(cf. IsFutureLSN()).  In normal conditions there is no difference, since
the last record written by the preceding loop is known flushed and it's
the one the test wants; but it's possible to have some other process
insert another WAL record that isn't flushed, and that causes the whole
test to explode.

Fix by having pg_get_wal_records_info() read only up to the flushed
position.  Backpatch to 15, which is where pg_walinspect appeared.
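
A sketch of the corrected pattern (the start LSN is illustrative):

    SELECT * FROM pg_get_wal_records_info('0/1000000',
                                           pg_current_wal_flush_lsn());
    -- bounded by the flushed position rather than the insert position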

Author: Karina Litskevich <litskevichkarina@gmail.com>
Discussion: https://postgr.es/m/a5559c95-52c3-5eea-cd63-9b4f1c70ff96@gmail.com
2022-12-23 17:27:05 +01:00
13e0d7a603 Rename pg_dissect_walfile_name() to pg_split_walfile_name()
The former name was discussed as being confusing, so use "split", as per
a suggestion from Magnus Hagander.

While on it, one of the output arguments is renamed from "segno" to
"segment_number", as per a suggestion from Kyotaro Horiguchi.

The documentation is updated to reflect all these changes.
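
Usage sketch under the new name:

    SELECT * FROM pg_split_walfile_name('000000010000000100000020');
    -- returns the segment_number and the timeline of the given file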

Bump catalog version.

Author: Bharath Rupireddy, Michael Paquier
Discussion: https://postgr.es/m/CABUevEytQVaOOhGdoh0D7hGwe3fuKcRF6NthsSW7ww04EmtFgQ@mail.gmail.com
2022-12-23 09:15:01 +09:00
ed1a88ddac Allow window functions to adjust their frameOptions
WindowFuncs such as row_number() don't care whether they're called with ROWS
UNBOUNDED PRECEDING AND CURRENT ROW or with RANGE UNBOUNDED PRECEDING AND
CURRENT ROW.  The latter is less efficient as the RANGE option requires
that the executor check for peer rows, so using the ROW option instead
would cause less overhead.  Because RANGE is part of the default frame
options for WindowClauses, it means WindowAgg is, by default, working much
harder than it needs to for window functions where the ROWS / RANGE option
has no effect on the window function's result.

On a test query from the discussion thread, a performance improvement of
344% was seen by using ROWS instead of RANGE.

Here we add a new support function node type to allow support functions to
be called for window functions so that the most optimal version of the
frame options can be set.  The planner has been adjusted so that the frame
options are changed only if all window functions sharing the same window
clause agree on what the optimized frame options are.

Here we give the ability for row_number(), rank(), dense_rank(),
percent_rank(), cume_dist() and ntile() to alter their WindowClause's
frameOptions.
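
As an illustrative example (table and column names are hypothetical),
the planner may now treat these two queries identically, because
row_number() is insensitive to the ROWS/RANGE distinction:

    SELECT row_number() OVER (ORDER BY a) FROM tab;
    SELECT row_number() OVER (ORDER BY a ROWS UNBOUNDED PRECEDING) FROM tab;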

Reviewed-by: Vik Fearing, Erwin Brandstetter, Zhihong Yu
Discussion: https://postgr.es/m/CAGHENJ7LBBszxS+SkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvohAKEtTXxq7Pc-ic2dKT8oZfbRKeEJP64M0B6+S88z+A@mail.gmail.com
2022-12-23 12:43:52 +13:00
f03bd5717e Use existing SSL certs in LDAP tests instead of generating them
The SSL test suite has a bunch of pre-existing certificates, so it's
better simply to use what we already have than generate new certificates
each time the LDAP tests are run.

Discussion: https://postgr.es/m/bc305c7a-f390-44f2-2e82-9bcaec6108da@dunslane.net
2022-12-20 10:02:49 -05:00
8284cf5f74 Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
2022-12-20 07:54:39 -05:00
3226f47282 Add enable_presorted_aggregate GUC
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause.  Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order.  Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.

While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c.  However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice.  The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.

Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC.  This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
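
For example, the pre-PG16 behavior can be restored with:

    SET enable_presorted_aggregate = off;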

Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
2022-12-20 22:28:58 +13:00
cca1863489 Add pg_dissect_walfile_name()
This function takes a WAL segment name as input and returns a tuple made
of the segment sequence number (dependent on the WAL segment size of the
cluster) and its timeline, as a thin SQL wrapper around the existing
XLogFromFileName().

This function has multiple usages, like being able to compute an LSN from
a file name and an offset, or finding the timeline of a segment without
having to do some math based on the first eight characters of the
segment.

Bump catalog version.

Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
2022-12-20 13:36:27 +09:00
7122f9d543 Fix bit-rotted planner test case.
While fooling with my pet outer-join-variables patch, I discovered
that the test case I added in commit 11086f2f2 no longer demonstrates
what it's supposed to.  The idea is to tempt the planner to reverse
the order of the two outer joins, which would leave noplace to
correctly evaluate the WHERE clause that's inserted between them.
Before the addition of the delay_upper_joins mechanism, it would
have taken the bait.

However, subsequent improvements broke the test in two different ways.
First, we now recognize the IS NULL coding pattern as an antijoin, and
we won't re-order antijoins; even if we did, the IS NULL test clauses
get removed so there would be no opportunity for them to misbehave.
Second, the planner now discovers that nested parameterized indexscans
are a lot cheaper than the double hash join it used back in the day,
and that approach doesn't want to re-order the joins anyway.  Thus,
in HEAD the test passes even if one dikes out delay_upper_joins.

To fix, change the IS NULL tests to COALESCE clauses, which produce
the same results but the planner isn't smart enough to convert them
to antijoins.  It'll still go for parameterized indexscans though,
so drop the index enabling that (don't know why I added that in the
first place), and disable nestloop joining just to be sure.

This time around, add an EXPLAIN to make the choice of plan visible.
2022-12-17 18:51:24 -05:00
935277b241 Fix inability to reference CYCLE column from inside its CTE.
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query.  We can just move that processing
to before the recursive parse_sub_analyze call, though.
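
A sketch of the kind of reference that now works (the query itself is
illustrative; it previously failed with the error quoted above):

    WITH RECURSIVE t(n) AS (
        SELECT 1
      UNION ALL
        SELECT n + 1 FROM t WHERE NOT is_cycle  -- CYCLE column used in the CTE
    ) CYCLE n SET is_cycle USING path
    SELECT n FROM t LIMIT 3;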

While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.

Per bug #17723 from Vik Fearing.  Back-patch to v14 where
the CYCLE feature was added.

Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
2022-12-16 13:07:42 -05:00
37bef842f5 Convert xml_in to report errors softly.
The key idea here is that xml_parse must distinguish hard errors
from soft errors.  We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.

One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8.  This might be something to improve
later, but it seems like a pretty low-probability scenario.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
4a29eabd1d Remove pessimistic cost penalization from Incremental Sort
When incremental sorts were added in v13 a 1.5x pessimism factor was added
to the cost model.  Seemingly this was done because the cost model only
has an estimate of the total number of input rows and the number of
presorted groups.  It assumes that the input rows will be evenly
distributed throughout the presorted groups.  The 1.5x pessimism factor
was added to slightly reduce the likelihood of incremental sorts being
used in the hope to avoid performance regressions where an incremental
sort plan was picked and turned out slower due to a large skew in the
number of rows in the presorted groups.

An additional quirk with the path generation code meant that we could
consider both a sort and an incremental sort on paths with presorted keys.
This meant that with the pessimism factor, it was possible that we opted
to perform a sort rather than an incremental sort when the given path had
presorted keys.

Here we remove the 1.5x pessimism factor to allow incremental sorts to
have a fairer chance at being chosen against a full sort.

Previously we would generally create a sort path on the cheapest input
path (if that wasn't sorted already) and incremental sort paths on any
path which had presorted keys.  This meant that if the cheapest input path
wasn't completely sorted but happened to have presorted keys, we would
create a full sort path *and* an incremental sort path on that input path.
Here we change this logic so that if there are presorted keys, we only
create an incremental sort path, and create sort paths only when a full
sort is required.

Both the removal of the cost pessimism factor and the changes made to the
path generation make it more likely that incremental sorts will now be
chosen.  That, of course, as with teaching the planner any new tricks,
means an increased likelihood that the planner will perform an incremental
sort when it's not the best method.  Our standard escape hatch for these
cases is an enable_* GUC.  enable_incremental_sort already exists for
this.
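
For example, if the planner's increased appetite for incremental sorts
misfires for a given query, the previous behavior can be forced with:

    SET enable_incremental_sort = off;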

This came out of a report by Pavel Luzanov where he mentioned that the
master branch was choosing to perform a Seq Scan -> Sort -> Group
Aggregate for his query with an ORDER BY aggregate function.  The v15 plan
for his query performed an Index Scan -> Group Aggregate, of course, the
aggregate performed the final sort internally in nodeAgg.c for the
aggregate's ORDER BY.  The ideal plan would have been to use the index,
which provided partially sorted input, then use an incremental sort to
provide the aggregate with the sorted input.  This was not being chosen
due to the pessimism in the incremental sort cost model, so here we remove
that and rationalize the path generation so that sort and incremental sort
plans don't have to needlessly compete.  We assume that it's senseless
to ever use a full sort on a given input path where an incremental sort
can be performed.

Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
2022-12-16 15:22:23 +13:00
8b6b043cee Re-adjust drop-index-concurrently-1 isolation test
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing.  d2d8a229b, which added incremental sorts,
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan.  This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that.  It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.

The reason this test has become broken seems to be due to how the test was
originally coded.  The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering.  Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flaky as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR.  Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.

Additionally, rename a few things in the test and add some additional
wording to the comments to try and make it more clear in the future what
we expect this test to be doing.

Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
2022-12-16 11:39:40 +13:00
d35a1af468 Convert range_in and multirange_in to report errors softly.
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input.  (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.)  Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well.  Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.

Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
2022-12-15 12:18:36 -05:00
2613dec4ed Move provariadic sanity check to a more appropriate place
35f059e9bdfb3b14ac9d22a9e159d36ec0ccf804 put the provariadic sanity
check into type_sanity.sql, even though it's not about types, and
moreover in the middle of some connected test group, which makes it
all very confusing.  Move it to opr_sanity.sql, where it is in better
company.
2022-12-15 07:54:48 +01:00
3b9d2deb67 Convert a few more datatype input functions to report errors softly.
Convert the remaining string-category input functions
(bpcharin, varcharin, byteain) to the new style.

Discussion: https://postgr.es/m/3038346.1671060258@sss.pgh.pa.us
2022-12-14 19:42:05 -05:00
90161dad4d Convert a few more datatype input functions to report errors softly.
Convert cash_in and uuid_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 18:03:11 -05:00
47f3f97fcd Convert a few more datatype input functions to report errors softly.
Convert assorted internal-ish datatypes, namely aclitemin,
int2vectorin, oidin, oidvectorin, pg_lsn_in, pg_snapshot_in,
and tidin to the new style.

(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all.  That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 17:50:24 -05:00
332741e739 Convert the geometric input functions to report errors softly.
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.

line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style.  I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place.  But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 16:10:20 -05:00
17407a8eaa Convert a few more datatype input functions to report errors softly.
Convert bit_in, varbit_in, inet_in, cidr_in, macaddr_in, and
macaddr8_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 13:22:08 -05:00
6fcda9aba8 Non-decimal integer literals
Add support for hexadecimal, octal, and binary integer literals:

    0x42F
    0o273
    0b100101

per SQL:202x draft.

This adds support in the lexer as well as in the integer type input
functions.
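
For example:

    SELECT 0x42F, 0o273, 0b100101;   -- 1071, 187, 37
    SELECT '0x2A'::int4;             -- input-function path, yields 42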

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-12-14 06:17:07 +01:00
60684dd834 Add grantable MAINTAIN privilege and pg_maintain role.
Allows VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER,
and LOCK TABLE.

Effectively reverts 4441fc704d. Instead of creating separate
privileges for VACUUM, ANALYZE, and other maintenance commands, group
them together under a single MAINTAIN privilege.
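
Usage sketch (role and table names are illustrative):

    GRANT MAINTAIN ON TABLE mytab TO alice;  -- per-table maintenance rights
    GRANT pg_maintain TO alice;              -- maintenance on all relations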

Author: Nathan Bossart
Discussion: https://postgr.es/m/20221212210136.GA449764@nathanxps13
Discussion: https://postgr.es/m/45224.1670476523@sss.pgh.pa.us
2022-12-13 17:33:28 -08:00
b0feda79fd Fix jsonb subscripting to cope with toasted subscript values.
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum.  That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.

setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums.  push_path() had the same issue.  This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.

Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files.  There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is.  Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array.  Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do.  In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.

Per report from exe-dealer@yandex.ru.  Patch by me, but thanks to
David Johnston for initial investigation.  Back-patch to v14 where
jsonb subscripting was introduced.

Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
2022-12-12 16:17:54 -05:00
df8b8968d4 Order getopt arguments
Order the letters in the arguments of getopt() and getopt_long(), as
well as in the subsequent switch statements.  In most cases, I used
alphabetical with lower case first.  In a few cases, existing
different orders (e.g., upper case first) were kept to reduce the diff
size.

Discussion: https://www.postgresql.org/message-id/flat/3efd0fe8-351b-f836-9122-886002602357%40enterprisedb.com
2022-12-12 15:20:00 +01:00
840ff5f451 Get rid of recursion-marker values in enum AlterTableType
During ALTER TABLE execution, when prep-time handling of subcommands of
certain types determines that execution-time handling requires recursion,
this is signaled by changing the subcommand type to a special value.
This can be done in a simpler way by using a separate flag introduced by
commit ec0925c22a3d, so do that.

Catversion bumped.  It's not clear to me that ALTER TABLE subcommands
are stored anywhere in catalogs (CREATE FUNCTION rejects it in BEGIN
ATOMIC function bodies), but we do have both write and read support for
them, so be safe.

Discussion: https://postgr.es/m/20220929090033.zxuaezcdwh2fgfjb@alvherre.pgsql
2022-12-12 11:13:26 +01:00
b8c0ffbd2c Convert domain_in to report errors softly.
This is straightforward as far as it goes.  However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints.  Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check.  Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating.  For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.
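
A short sketch (the domain is a made-up example): syntactically bad
input for the underlying type is now reported softly:

    CREATE DOMAIN posint AS integer CHECK (VALUE > 0);
    SELECT pg_input_is_valid('junk', 'posint');   -- false, no error thrown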

Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
2022-12-11 12:56:54 -05:00
c60c9badba Convert json_in and jsonb_in to report errors softly.
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.

In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase.  The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded.  We should improve that, but it seems like a
separable project.

Andrew Dunstan and Tom Lane

Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
2022-12-11 11:28:15 -05:00
c60488b474 Convert datetime input functions to use "soft" error reporting.
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 16:07:49 -05:00
ccff2d20ed Convert a few datatype input functions to use "soft" error reporting.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted.  There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:14:53 -05:00
1939d26282 Add test scaffolding for soft error reporting from input functions.
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK.  The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.

Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.
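
For example, once a type's input function has been converted (as the
follow-up commits do for the core types):

    SELECT pg_input_is_valid('42', 'integer');         -- true
    SELECT pg_input_is_valid('junk', 'integer');       -- false
    SELECT pg_input_error_message('junk', 'integer');  -- the primary message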

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:08:44 -05:00
beecbe8e50 Fix invalid role names introduced in 096dd80f3c
096dd80f3c added new regression tests dealing with roles.  By oversight, the
role names didn't start with the regress_ prefix.  This commit fixes that.
2022-12-09 13:53:32 +03:00
096dd80f3c Add USER SET parameter values for pg_db_role_setting
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role.  That lets ordinary roles set placeholder variables whose
permission requirements are not known yet.  Such a value wouldn't be used if
the variable finally turns out to require superuser privileges.

The new flags are stored in the pg_db_role_setting.setuser array.  Catversion
is bumped.

This commit is inspired by the previous work by Steve Chavez.

Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
2022-12-09 13:12:20 +03:00
5defdef8aa Update MERGE docs to mention that ONLY is supported.
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.

Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.

Dean Rasheed, reviewed by Nathan Bossart.

Backpatch to 15, where MERGE was added.

Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
2022-12-09 10:00:01 +00:00