Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
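For illustration (an invented example, not a hunk from the patch), the
new rule keeps a continuation line aligned under the open paren even
when that overruns the margin:

    ret = function_with_a_long_name(first_arg,
                                    second_argument_running_well_past_the_80_column_limit);

Previously, indent could slide the second line back leftward to stay
within 80 columns, producing the mixed indentation described above.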
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
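An invented example of the new placement: when the code runs past
column 33, the trailing comment now starts at the next 4-space tab stop
after the code, uniformly:

    context->first_partial_plan_index = idx;    /* next 4-space stop */

Under the old 8-space-stop logic the same comment could land a full
eight columns further right.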
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:
* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
than the expected column 33.
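To make a couple of those items concrete (invented examples, not hunks
from the diff), spacing like this now survives indent unchanged:

    /* invented examples: the new indent leaves this spacing alone */
    n = sizeof(MyStruct) + offsetof(MyStruct, field);
    p = (struct structname *) ptr;
    struct structname *varname;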
On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list. This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.
There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses. I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.
Discussion: https://postgr.es/m/E1dAmxK-0006EE-1r@gemulon.postgresql.org
Discussion: https://postgr.es/m/30527.1495162840@sss.pgh.pa.us
Table partitioning, introduced in commit f0e44751d7, added a new
relkind - RELKIND_PARTITIONED_TABLE. Update a couple of places in
PL/pgSQL to handle it. Specifically plpgsql_parse_cwordtype() and
build_row_from_class() needed updating in order to make table%ROWTYPE
and table.col%TYPE work for partitioned tables.
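The nature of the fix is a relkind test along these lines (a
hypothetical sketch, not the actual diff):

    /* hypothetical sketch: accept partitioned tables wherever plain
     * tables were accepted when resolving %ROWTYPE / %TYPE */
    static bool
    relkind_has_rowtype(char relkind)
    {
        return relkind == RELKIND_RELATION ||
            relkind == RELKIND_PARTITIONED_TABLE;
    }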
Dean Rasheed, reviewed by Amit Langote.
Discussion: https://postgr.es/m/CAEZATCUnNOKN8sLML9jUzxecALWpEXK3a3W7y0PgFR4%2Buhgc%3Dg%40mail.gmail.com
This extends the castNode() notation introduced by commit 5bcab1114 to
provide, in one step, extraction of a list cell's pointer and coercion to
a concrete node type. For example, "lfirst_node(Foo, lc)" is the same
as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode
that have appeared so far include a list extraction call, so this is
pretty widely useful, and it saves a few more keystrokes compared to the
old way.
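The definition is essentially a one-liner on top of the existing macros
(paraphrasing pg_list.h), and a typical call site looks like this
(illustrative usage, not a specific hunk):

    #define lfirst_node(type, lc)  castNode(type, lfirst(lc))

    /* illustrative call site */
    ListCell   *lc;

    foreach(lc, query->targetList)
    {
        TargetEntry *tle = lfirst_node(TargetEntry, lc);

        /* ... work with tle ... */
    }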
As with the previous patch, back-patch the addition of these macros to
pg_list.h, so that the notation will be available when back-patching.
Patch by me, after an idea of Andrew Gierth's.
Discussion: https://postgr.es/m/14197.1491841216@sss.pgh.pa.us
Commit 59702716 added transition table support to PL/pgsql so that
SQL queries in trigger functions could access those transient
tables. In order to provide the same level of support for PL/perl,
PL/python and PL/tcl, refactor the relevant code into a new
function SPI_register_trigger_data. Call the new function in the
trigger handler of all four PLs, and document it as a public SPI
function so that authors of out-of-tree PLs can do the same.
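In each PL's trigger handler the call pattern is roughly as follows (a
sketch with error handling compressed; trigdata is the handler's
TriggerData pointer, and SPI_OK_TD_REGISTER the expected success code):

    /* sketch: register the trigger's transition tables with SPI */
    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");
    if (SPI_register_trigger_data(trigdata) != SPI_OK_TD_REGISTER)
        elog(ERROR, "SPI_register_trigger_data failed");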
Also get rid of a second QueryEnvironment object that was
maintained by PL/pgsql. That was previously used to deal with
cursors, but the same approach wasn't appropriate for PLs that are
less tangled up with core code. Instead, have SPI_cursor_open
install the connection's current QueryEnvironment, as already
happens for SPI_execute_plan.
While in the docs, remove the note that transition tables were only
supported in C and PL/pgSQL triggers, and correct some omissions.
Thomas Munro with some work by Kevin Grittner (mostly docs)
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.
This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.
The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
out operation metadata sequentially; including the avoidance of
nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
constant re-checks at evaluation time
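As a toy illustration of the dispatch style (nothing like the
executor's real data structures), each "instruction" below is a slot in
a sequential array, so evaluation is a tight loop over opcodes rather
than a cascade of recursive function calls:

    /* toy opcode-dispatch evaluator, purely illustrative */
    typedef enum { OP_CONST, OP_ADD, OP_DONE } ExprOp;

    typedef struct
    {
        ExprOp      op;
        int         value;          /* used by OP_CONST only */
    } ExprStep;

    static int
    eval_steps(const ExprStep *step)
    {
        int         stack[16];
        int         sp = 0;

        for (;; step++)
        {
            switch (step->op)
            {
                case OP_CONST:
                    stack[sp++] = step->value;
                    break;
                case OP_ADD:
                    sp--;
                    stack[sp - 1] += stack[sp];
                    break;
                case OP_DONE:
                    return stack[sp - 1];
            }
        }
    }

The real interpreter additionally uses computed-goto dispatch where the
compiler supports it.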
Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.
The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
overhead of expression evaluation, by caching state in prepared
statements. That'd be helpful in OLTPish scenarios where
initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
been made here too.
The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
initialization, whereas previously they were done during
execution. In edge cases this can lead to errors being raised that
previously wouldn't have been, e.g. a NULL array being coerced to a
different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
during expression initialization; previously it was re-built
every time a domain check was evaluated. For normal queries this
doesn't change much, but e.g. for plpgsql functions, which cache
ExprStates, the old set could stick around longer. This behavior
might still change.
Author: Andres Freund, with significant changes by Tom Lane,
changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
In SQL, the ability to use parallel query was previously contingent on
fcache->readonly_func, which is only set for non-volatile functions;
but the volatility of a function has no bearing on whether queries
inside it can use parallelism. Remove that condition.
SPI_execute and SPI_execute_with_args always run the plan just once,
though not necessarily to completion. Given the changes in commit
691b8d59281b5177f16fe80858df921f77a8e955, it's sensible to pass
CURSOR_OPT_PARALLEL_OK here, so do that. This improves access to
parallelism for any caller that uses these functions to execute
queries. Such callers include plperl, plpython, pltcl, and plpgsql,
though it's not the case that they all use these functions
exclusively.
In plpgsql, allow parallel query for plain SELECT queries (as
opposed to PERFORM, which already worked) and for plain expressions
(which probably won't go through the executor at all, because they
will likely be simple expressions, but if they do then this helps).
Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b allowed a parallel
plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
statement in a PL/pgsql block, but that's a bad idea because plpgsql
asks the executor for 50 rows at a time. That means we'd always
be running serially a plan that was intended for parallel execution.
Fix by not requesting a parallel plan from the outset.
Per discussion, back-patch to 9.6. There is a slight risk that, due
to optimizer error, somebody could have a case where the parallel plan
executed serially is actually faster than the supposedly-best serial
plan, but the consensus seems to be that that's not sufficient
justification for leaving 9.6 unpatched.
Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
We have a project policy that every .c file should start by including
postgres.h, postgres_fe.h, or c.h as appropriate; and then there is no
need for any .h file to explicitly include any of these. (The core
reason for this policy is to make it easy to verify that pg_config_os.h
is included before any system headers such as <stdio.h>; without that,
we have portability issues on some platforms due to variation in largefile
options across different modules in the backend. Also, if .h files were
responsible for choosing which of these key headers to include, .h files
that need to be includable in either frontend or backend compiles would be
in trouble.)
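Concretely, the expected layout of a backend .c file is (illustrative):

    #include "postgres.h"           /* always first */

    #include <ctype.h>              /* system headers next */

    #include "utils/builtins.h"     /* core headers */
    #include "plpgsql.h"            /* the module's own headers last */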
plpgsql was blithely ignoring this policy, so whack it upside the head
until it complies. I also chose to standardize on including plpgsql's
own .h files after all core-system headers that it pulls in. That
could've been done either way, but this way seems saner.
Discussion: https://postgr.es/m/CAEepm=2zCoeq3QxVwhS5DFeUh=yU6z81pbWMgfOB8OzyiBwxzw@mail.gmail.com
Discussion: https://postgr.es/m/11634.1488932128@sss.pgh.pa.us
When I wrote commit ab1f0c822, I really missed the castNode() macro that
Peter E. had proposed shortly before. This back-fills the uses I would
have put it to. It's probably not all that significant, but there are
more assertions here than there were before, and conceivably they will
help catch any bugs associated with those representation changes.
I left behind a number of usages like "(Query *) copyObject(query_var)".
Those could have been converted as well, but Peter has proposed another
notational improvement that would handle copyObject cases automatically,
so I let that be for now.
Since 69f4b9c, plain expression evaluation (and thus normal projection)
can't return sets of tuples anymore. Thus remove code dealing with
that possibility.
This will require adjustments in external code using
ExecEvalExpr()/ExecProject() - that should neither be hard nor very
common.
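For callers the adjustment is mostly dropping the set-result plumbing;
a plain evaluation now looks like this (sketch):

    Datum       result;
    bool        isnull;

    result = ExecEvalExpr(exprstate, econtext, &isnull);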
Author: Andres Freund and Tom Lane
Discussion: https://postgr.es/m/20160822214023.aaxz5l4igypowyri@alap3.anarazel.de
This patch makes several changes that improve the consistency of
representation of lists of statements. It's always been the case
that the output of parse analysis is a list of Query nodes, whatever
the types of the individual statements in the list. This patch brings
similar consistency to the outputs of raw parsing and planning steps:
* The output of raw parsing is now always a list of RawStmt nodes;
the statement-type-dependent nodes are one level down from that.
* The output of pg_plan_queries() is now always a list of PlannedStmt
nodes, even for utility statements. In the case of a utility statement,
"planning" just consists of wrapping a CMD_UTILITY PlannedStmt around
the utility node. This list representation is now used in Portal and
CachedPlan plan lists, replacing the former convention of intermixing
PlannedStmts with bare utility-statement nodes.
Now, every list of statements has a consistent head-node type depending
on how far along it is in processing. This allows changing many places
that formerly used generic "Node *" pointers to use a more specific
pointer type, thus reducing the number of IsA() tests and casts needed,
as well as improving code clarity.
Also, the post-parse-analysis representation of DECLARE CURSOR is changed
so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained
SELECT remains a child of the DeclareCursorStmt rather than getting flipped
around to be the other way. It's now true for both Query and PlannedStmt
that utilityStmt is non-null if and only if commandType is CMD_UTILITY.
That allows simplifying a lot of places that were testing both fields.
(I think some of those were just defensive programming, but in many places,
it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)
Because PlannedStmt carries a canSetTag field, we're also able to get rid
of some ad-hoc rules about how to reconstruct canSetTag for a bare utility
statement; specifically, the assumption that a utility is canSetTag if and
only if it's the only one in its list. While I see no near-term need for
relaxing that restriction, it's nice to get rid of the ad-hocery.
The API of ProcessUtility() is changed so that what it's passed is the
wrapper PlannedStmt not just the bare utility statement. This will affect
all users of ProcessUtility_hook, but the changes are pretty trivial; see
the affected contrib modules for examples of the minimum change needed.
(Most compilers should give pointer-type-mismatch warnings for uncorrected
code.)
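For a hook, the minimal change is typically along these lines (a
sketch; the parameter list is abbreviated from memory and may not match
exactly):

    static void
    my_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
                      ProcessUtilityContext context, ParamListInfo params,
                      DestReceiver *dest, char *completionTag)
    {
        Node       *parsetree = pstmt->utilityStmt; /* formerly passed directly */

        /* ... examine parsetree as before ... */

        standard_ProcessUtility(pstmt, queryString, context, params,
                                dest, completionTag);
    }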
There's also a change in the API of ExplainOneQuery_hook, to pass through
cursorOptions instead of expecting hook functions to know what to pick.
This is needed because of the DECLARE CURSOR changes, but really should
have been done in 9.6; it's unlikely that any extant hook functions
know about using CURSOR_OPT_PARALLEL_OK.
Finally, teach gram.y to save statement boundary locations in RawStmt
nodes, and pass those through to Query and PlannedStmt nodes. This allows
more intelligent handling of cases where a source query string contains
multiple statements. This patch doesn't actually do anything with the
information, but a follow-on patch will. (Passing this information through
cleanly is the true motivation for these changes; while I think this is all
good cleanup, it's unlikely we'd have bothered without this end goal.)
catversion bump because addition of location fields to struct Query
affects stored rules.
This patch is by me, but it owes a good deal to Fabien Coelho who did
a lot of preliminary work on the problem, and also reviewed the patch.
Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
There's no good reason why plpgsql's GET DIAGNOSTICS statement can't
support an array element as target variable, since the execution code
already uses the generic exec_assign_value() function to assign to it.
Hence, refactor the grammar to allow that, by making getdiag_target
depend on the assign_var production.
Ideally we'd also let a cursor_variable expand to an element of a
refcursor[] array, but that's substantially harder since those statements
also have to handle bound-cursor-variable cases. For now, just make sure
the reported error is sensible, ie "cursor variable must be a simple
variable" not "variable must be of type cursor or refcursor". The latter
was quite confusing from the user's viewpoint, since what he wrote
satisfies the claimed restriction.
Per bug #14463 from Zhou Digoal. Given the lack of previous complaints,
I see no need for a back-patch.
Discussion: https://postgr.es/m/20161213152548.14897.81245@wrigleys.postgresql.org
The idea behind SPI_push was to allow transitioning back into an
"unconnected" state when a SPI-using procedure calls unrelated code that
might or might not invoke SPI. That sounds good, but in practice the only
thing it does for us is to catch cases where a called SPI-using function
forgets to call SPI_connect --- which is a highly improbable failure mode,
since it would be exposed immediately by direct testing of said function.
As against that, we've had multiple bugs induced by forgetting to call
SPI_push/SPI_pop around code that might invoke SPI-using functions; these
are much harder to catch and indeed have gone undetected for years in some
cases. And we've had to band-aid around some problems of this ilk by
introducing conditional push/pop pairs in some places, which really kind
of defeats the purpose altogether; if we can't draw bright lines between
connected and unconnected code, what's the point?
Hence, get rid of SPI_push[_conditional], SPI_pop[_conditional], and the
underlying state variable _SPI_curid. It turns out SPI_restore_connection
can go away too, which is a nice side benefit since it was never more than
a kluge. Provide no-op macros for the deleted functions so as to avoid an
API break for external modules.
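The shims are simple no-ops, roughly as follows (paraphrasing the new
spi.h):

    /* formerly functions; now no-ops kept for source compatibility */
    #define SPI_push()                  ((void) 0)
    #define SPI_pop()                   ((void) 0)
    #define SPI_push_conditional()      false
    #define SPI_pop_conditional(pushed) ((void) 0)
    #define SPI_restore_connection()    ((void) 0)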
A side effect of this removal is that SPI_palloc and allied functions no
longer permit being called when unconnected; they'll throw an error
instead. The apparent usefulness of the previous behavior was a mirage
as well, because it was depended on by only a few places (which I fixed in
preceding commits), and it posed a risk of allocations being unexpectedly
long-lived if someone forgot a SPI_push call.
Discussion: <20808.1478481403@sss.pgh.pa.us>
Invent a new function heap_modify_tuple_by_cols() that is functionally
equivalent to SPI_modifytuple except that it always allocates its result
by simple palloc. I chose however to make the API details a bit more
like heap_modify_tuple: pass a tupdesc rather than a Relation, and use
bool convention for the isnull array.
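A call replacing a single column then looks about like this
(illustrative; tuple and tupdesc are assumed to be in scope):

    /* replace attribute 3, allocating in the current memory context */
    int         replCol = 3;
    Datum       replValue = Int32GetDatum(42);
    bool        replIsnull = false;
    HeapTuple   newtup;

    newtup = heap_modify_tuple_by_cols(tuple, tupdesc,
                                       1, &replCol, &replValue, &replIsnull);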
Use this function in place of SPI_modifytuple at all call sites where the
intended behavior is to allocate in current context. (There actually are
only two call sites left that depend on the old behavior, which makes me
wonder if we should just drop this function rather than keep it.)
This new function is easier to use than heap_modify_tuple() for purposes
of replacing a single column (or, really, any fixed number of columns).
There are a number of places where it would simplify the code to change
over, but I resisted that temptation for the moment ... everywhere except
in plpgsql's exec_assign_value(); changing that might offer some small
performance benefit, so I did it.
This is on the way to removing SPI_push/SPI_pop, but it seems like
good code cleanup in its own right.
Discussion: <9633.1478552022@sss.pgh.pa.us>
We must do this in case the expression evaluation results in calling
another plpgsql function (or, really, anything using SPI). I missed
the need for this when I converted exec_cast_value() from doing a
simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b.
There is a SPI_push_conditional in InputFunctionCall(), so there
was no bug before that change.
Per bug #14414 from Marcos Castedo. Add a regression test based on his
example, which was that a plpgsql function in a domain check constraint
didn't work when assigning to a domain-type variable within plpgsql.
Report: <20161106010947.1387.66380@wrigleys.postgresql.org>
Teach the parser to reject misplaced set-returning functions during parse
analysis using p_expr_kind, in much the same way as we do for aggregates
and window functions (cf commit eaccfded9). While this isn't complete
(it misses nesting-based restrictions), it's much better than the previous
error reporting for such cases, and it allows elimination of assorted
ad-hoc expression_returns_set() error checks. We could add nesting checks
later if it seems important to catch all cases at parse time.
There is one case for which the parser will now throw an error although
previous versions allowed it, namely SRFs in the tlist of an UPDATE. That never
behaved sensibly (since it's ill-defined which generated row should be
used to perform the update) and it's hard to see why it should not be
treated as an error. It's a release-note-worthy change though.
Also, add a new Query field hasTargetSRFs reporting whether there are
any SRFs in the targetlist (including GROUP BY/ORDER BY expressions).
The parser can now set that basically for free during parse analysis,
and we can use it in a number of places to avoid expression_returns_set
searches. (There will be more such checks soon.) In some places, this
allows decontorting the logic since it's no longer expensive to check for
SRFs in the tlist --- so I made the checks parallel to the handling of
hasAggs/hasWindowFuncs wherever it seemed appropriate.
catversion bump because adding a Query field changes stored rules.
Andres Freund and Tom Lane
Discussion: <24639.1473782855@sss.pgh.pa.us>
plpgsql.h defines a number of enums, but most of the code passes them
around as ints. Update structs and function prototypes to take enum
types instead. This clarifies the struct definitions in plpgsql.h in
particular.
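The changes are of this general shape (sketch):

    typedef struct PLpgSQL_datum
    {
        PLpgSQL_datum_type dtype;   /* was: int dtype; */
        int         dno;
    } PLpgSQL_datum;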
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls
had typos in the context-sizing parameters. While none of these led to
especially significant problems, they did create minor inefficiencies,
and it's now clear that expecting people to copy-and-paste those calls
accurately is not a great idea. Let's reduce the risk of future errors
by introducing single macros that encapsulate the common use-cases.
Three such macros are enough to cover all but two special-purpose contexts;
those two calls can be left as-is, I think.
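With the macros, the typical call shrinks from spelling out three sizes
to one token:

    /* before */
    cxt = AllocSetContextCreate(parent, "MyContext",
                                ALLOCSET_DEFAULT_MINSIZE,
                                ALLOCSET_DEFAULT_INITSIZE,
                                ALLOCSET_DEFAULT_MAXSIZE);

    /* after */
    cxt = AllocSetContextCreate(parent, "MyContext",
                                ALLOCSET_DEFAULT_SIZES);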
While this patch doesn't in itself improve matters for third-party
extensions, it doesn't break anything for them either, and they can
gradually adopt the simplified notation over time.
In passing, change TopMemoryContext to use the default allocation
parameters. Formerly it could only be extended 8K at a time. That was
probably reasonable when this code was written; but nowadays we create
many more contexts than we did then, so that it's not unusual to have a
couple hundred K in TopMemoryContext, even without considering various
dubious code that sticks other things there. There seems no good reason
not to let it use growing blocks like most other contexts.
Back-patch to 9.6, mostly because that's still close enough to HEAD that
it's easy to do so, and keeping the branches in sync can be expected to
avoid some future back-patching pain. The bugs fixed by these changes
don't seem to be significant enough to justify fixing them further back.
Discussion: <21072.1472321324@sss.pgh.pa.us>
These types are storage-compatible with real arrays, but they don't support
toasting, so of course they can't support expansion either.
Per bug #14289 from Michael Overmeyer. Back-patch to 9.5 where expanded
arrays were introduced.
Report: <20160818174414.1529.37913@wrigleys.postgresql.org>
This file had some unusual comment layout. Most of the comments
introducing structs ended up to the right of the code and after the
start of the struct. Some comments for struct members ended up after
the member definition.
Fix that by moving comments consistently before what they describe.
Also add struct tags where missing, so that it is easier to tell what
each struct is.
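Schematically (PLpgSQL_stmt_foo is an invented stand-in), the file
moved from the first style to the second:

    /* before: trailing comment, no struct tag */
    typedef struct
    {                               /* FOO statement */
        int         lineno;        /* line number */
    } PLpgSQL_stmt_foo;

    /* after: the comment introduces the struct, and the tag is present */

    /*
     * FOO statement
     */
    typedef struct PLpgSQL_stmt_foo
    {
        /* line number */
        int         lineno;
    } PLpgSQL_stmt_foo;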
In some cases, exiting out of a plpgsql statement due to an error, then
catching the error in a surrounding exception block, led to leakage of
temporary data the statement was working with, because we kept all such
data in the function-lifespan SPI Proc context. Iterating such behavior
many times within one function call thus led to noticeable memory bloat.
To fix, create an additional memory context meant to have statement
lifespan. Since many plpgsql statements, particularly the simpler/more
common ones, don't need this, create it only on demand. Reset this context
at the end of any statement that uses it, and arrange for exception cleanup
to reset it too, thereby fixing the memory-leak issue. Allow a stack of
such contexts to exist to handle cases where a compound statement needs
statement-lifespan data that persists across calls of inner statements.
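A sketch of the on-demand pattern (hypothetical names; the real code
differs in detail):

    /* create the statement-lifespan context only when first requested */
    static MemoryContext
    get_stmt_mcontext(PLpgSQL_execstate *estate)
    {
        if (estate->stmt_mcontext == NULL)
            estate->stmt_mcontext =
                AllocSetContextCreate(estate->stmt_mcontext_parent,
                                      "stmt exec context",
                                      ALLOCSET_DEFAULT_SIZES);
        return estate->stmt_mcontext;
    }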
While at it, clean up code and improve comments referring to the existing
short-term memory context, which by plpgsql convention is the per-tuple
context of the eval_econtext ExprContext. We now uniformly refer to that
as the eval_mcontext, whereas the new statement-lifespan memory contexts
are called stmt_mcontext.
This change adds some context-creation overhead, but on the other hand
it allows removal of some retail pfree's in favor of context resets.
On balance it seems to be about a wash performance-wise.
In principle this is a bug fix, but it seems too invasive for a back-patch,
and the infrequency of complaints weighs against taking the risk in the
back branches. So we'll fix it only in HEAD, at least for now.
Tom Lane, reviewed by Pavel Stehule
Discussion: <17863.1469142152@sss.pgh.pa.us>
We implement a dozen or so parameterless functions that the SQL standard
defines special syntax for. Up to now, that was done by converting them
into more or less ad-hoc constructs such as "'now'::text::date". That's
messy for multiple reasons: it exposes what should be implementation
details to users, and performance is worse than it needs to be in several
cases. To improve matters, invent a new expression node type
SQLValueFunction that can represent any of these parameterless functions.
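The node itself is small (paraphrasing the new struct definition):

    typedef struct SQLValueFunction
    {
        Expr        xpr;
        SQLValueFunctionOp op;      /* which function, e.g. CURRENT_DATE */
        Oid         type;           /* result type */
        int32       typmod;
        int         location;      /* token location, or -1 if unknown */
    } SQLValueFunction;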
Bump catversion because this changes stored parsetrees for rules.
Discussion: <30058.1463091294@sss.pgh.pa.us>
Since IMPORT FOREIGN SCHEMA has an INTO clause, pl/pgsql needs to be
aware of that and avoid capturing the INTO as an INTO-variables clause.
This isn't hard, though it's annoying to have to make IMPORT a plpgsql
keyword just for this. (Fortunately, we have the infrastructure now
to make it an unreserved keyword, so at least this shouldn't break any
existing pl/pgsql code.)
Per report from Merlin Moncure. Back-patch to 9.5 where IMPORT FOREIGN
SCHEMA was introduced.
Report: <CAHyXU0wpHf2bbtKGL1gtUEFATCY86r=VKxfcACVcTMQ70mCyig@mail.gmail.com>
This patch widens SPI_processed, EState's es_processed field, PortalData's
portalPos field, FuncCallContext's call_cntr and max_calls fields,
ExecutorRun's count argument, PortalRunFetch's result, and the max number
of rows in a SPITupleTable to uint64, and deals with (I hope) all the
ensuing fallout. Some of these values were declared uint32 before, and
others "long".
I also removed PortalData's posOverflow field, since that logic seems
pretty useless given that portalPos is now always 64 bits.
The user-visible results are that command tags for SELECT etc will
correctly report tuple counts larger than 4G, as will plpgsql's
GET DIAGNOSTICS ... ROW_COUNT command. Queries processing more tuples
than that are still not exactly the norm, but they're becoming more
common.
Most values associated with FETCH/MOVE distances, such as PortalRun's count
argument and the count argument of most SPI functions that have one, remain
declared as "long". It's not clear whether it would be worth promoting
those to int64; but it would definitely be a large dollop of additional
API churn on top of this, and it would only help 32-bit platforms which
seem relatively less likely to see any benefit.
Andreas Scherbaum, reviewed by Christian Ullrich, additional hacking by me
Commit d1b7c1ffe72e86932b5395f29e006c3f503bc53d introduced a mechanism
for serializing a ParamListInfo structure to be passed to a parallel
worker. However, this mechanism failed to handle external expanded
values, as pointed out by Noah Misch. Repair.
Moreover, plpgsql_param_fetch requires adjustment because the
serialization mechanism needs it to skip evaluating unused parameters
just as we would do when it is called from copyParamList, but params
== estate->paramLI in that case. To fix, make the bms_is_member test
in that function unconditional.
Finally, have setup_param_list set a new ParamListInfo field,
paramMask, to the parameters actually used in the expression, so that
we don't try to fetch those that are not needed when serializing a
parameter list. This isn't necessary for correctness, but it makes
the performance of the parallel executor code comparable to what we
do for cases involving cursors.
Design suggestions and extensive review by Noah Misch. Patch by me.
Commit 924bcf4f16d54c55310b28f77686608684734f42 introduced a framework
for parallel computation in PostgreSQL that makes most but not all
built-in functions safe to execute in parallel mode. In order to have
parallel query, we'll need to be able to determine whether that query
contains functions (either built-in or user-defined) that cannot be
safely executed in parallel mode. This requires those functions to be
labeled, so this patch introduces an infrastructure for that. Some
functions currently labeled as safe may need to be revised depending on
how pending issues related to heavyweight locking under parallelism
are resolved.
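Code deciding whether a given function blocks parallelism can consult
the new label via the catalog cache, e.g. (illustrative):

    /* illustrative lsyscache lookup of the new pg_proc.proparallel */
    if (func_parallel(funcoid) != PROPARALLEL_SAFE)
        return false;           /* not safe to run in parallel mode */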
Parallel plans can't be used except for the case where the query will
run to completion. If portal execution were suspended, the parallel
mode restrictions would need to remain in effect during that time, but
that might make other queries fail. Therefore, this patch introduces
a framework that enables consideration of parallel plans only when it
is known that the plan will be run to completion. This probably needs
some refinement; for example, at bind time, we do not know whether a
query run via the extended protocol will be executed to completion or
run with a limited fetch count. Having the client indicate its
intentions at bind time would constitute a wire protocol break. Some
contexts in which parallel mode would be safe are not adjusted by this
patch; the default is not to try parallel plans except from call sites
that have been updated to say that such plans are OK.
This commit doesn't introduce any parallel paths or plans; it just
provides a way to determine whether they could potentially be used.
I'm committing it on the theory that the remaining parallel sequential
scan patches will also get committed to this release, hopefully in the
not-too-distant future.
Robert Haas and Amit Kapila. Reviewed (in earlier versions) by Noah
Misch.
Remove the code in plpgsql that suppressed the innermost line of CONTEXT
for messages emitted by RAISE commands. That was never more than a quick
backwards-compatibility hack, and it's pretty silly in cases where the
RAISE is nested in several levels of function. What's more, it violated
our design theory that verbosity of error reports should be controlled
on the client side not the server side.
To alleviate the resulting noise increase, introduce a feature in libpq
and psql whereby the CONTEXT field of messages can be suppressed, either
always or only for non-error messages. Printing CONTEXT for errors only
is now their default behavior.
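From libpq the choice is exposed like so (psql's SHOW_CONTEXT variable
wraps the same setting):

    /* show CONTEXT only in error reports (the new default) */
    PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_ERRORS);

    /* or suppress CONTEXT entirely */
    PQsetErrorContextVisibility(conn, PQSHOW_CONTEXT_NEVER);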
The actual code changes here are pretty small, but the effects on the
regression test outputs are widespread. I had to edit some of the
alternative expected outputs by hand; hopefully the buildfarm will soon
find anything I fat-fingered.
In passing, fix up (again) the output line counts in psql's various
help displays. Add some commentary about how to verify them.
Pavel Stehule, reviewed by Petr Jelínek, Jeevan Chalke, and others