The parser got confused if a cursor parameter had the same name as
a plpgsql variable. Reported and diagnosed by Yeb Havinga, though
this isn't exactly his proposed fix.
Also, some mostly-but-not-entirely-cosmetic adjustments to the original
named-cursor-parameter patch, for code readability and better error
diagnostics.
An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
caused the code to treat an assignment to a variable with dno zero as a
no-op. Unfortunately, that's a perfectly valid dno. This has been broken
since GET DIAGNOSTICS was invented. It's not terribly surprising that the
bug went unnoticed for so long, since in most cases you probably wouldn't
use the function's first-created variable (normally its first parameter)
as a GET DIAGNOSTICS target. Nonetheless, it's broken. Per bug #6551
from Adam Buraczewski.
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements. The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.
In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs. Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.
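For add-on authors, an adjusted hook implementation looks roughly like the
sketch below; the parameter lists (in particular the new IntoClause argument)
reflect my reading of the 9.2-era headers and should be verified against
commands/explain.h.

    #include "postgres.h"
    #include "fmgr.h"
    #include "commands/explain.h"
    #include "tcop/tcopprot.h"

    PG_MODULE_MAGIC;

    /* Sketch of an ExplainOneQuery_hook adjusted for the new API. */
    static void
    my_ExplainOneQuery(Query *query, IntoClause *into, ExplainState *es,
                       const char *queryString, ParamListInfo params)
    {
        /* plan the query ourselves, then hand off to the standard printer */
        PlannedStmt *plan = pg_plan_query(query, 0, params);

        /* note the extra IntoClause argument now expected by ExplainOnePlan */
        ExplainOnePlan(plan, into, es, queryString, params);
    }

    void
    _PG_init(void)
    {
        ExplainOneQuery_hook = my_ExplainOneQuery;
    }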
Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.
Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.
Andres Freund and Tom Lane
Failing to do so causes trigger invocations to fail when they are nested
within a function invocation that changes the current package.
Backpatch to 9.1; previous releases used a different method to obtain
_TD. Per bug report from Mark Murawski (bug #6511)
Author: Alex Hunsaker
Dave Malcolm of Red Hat is working on a static code analysis tool for
Python-related C code. It reported a number of problems in plpython,
most of which were failures to check for NULL results from object-creation
functions, and so would only be an issue in very-low-memory situations.
Patch in HEAD and 9.1. We could go further back but it's not clear that
these issues are important enough to justify the work.
Jan Urbański
This replaces the former global variable PLy_curr_procedure, and provides
a place to stash per-call-level information. In particular we create a
per-call-level scratch memory context.
For the moment, the scratch context is just used to avoid leaking memory
from datatype output function calls in PLyDict_FromTuple. There probably
will be more use-cases in future.
Although this is a fix for a pre-existing memory leakage bug, it seems
sufficiently invasive to not want to back-patch; it feels better as part
of the major rearrangement of plpython code that we've already done as
part of 9.2.
Jan Urbański
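The scratch-context pattern, schematically (context name and helper are
illustrative, not the actual plpython code):

    #include "postgres.h"
    #include "utils/memutils.h"

    /*
     * Run a leak-prone call (such as a datatype output function) inside a
     * scratch memory context, then throw the context away to recover
     * whatever the call leaked.
     */
    static void
    call_in_scratch_context(void (*leaky_call) (void))
    {
        MemoryContext scratch_ctx;
        MemoryContext oldcontext;

        scratch_ctx = AllocSetContextCreate(CurrentMemoryContext,
                                            "scratch context",
                                            ALLOCSET_DEFAULT_MINSIZE,
                                            ALLOCSET_DEFAULT_INITSIZE,
                                            ALLOCSET_DEFAULT_MAXSIZE);

        oldcontext = MemoryContextSwitchTo(scratch_ctx);
        leaky_call();           /* anything it palloc's ends up in scratch_ctx */
        MemoryContextSwitchTo(oldcontext);

        MemoryContextDelete(scratch_ctx);
    }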
Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
since they are generally called in short-lived contexts. However, plpgsql
calls such functions for purposes of type conversion, and was calling them
in its procedure context. Therefore, any leaked memory would not be
recovered until the end of the plpgsql function. If such a conversion
was done within a loop, quite a bit of memory could get consumed. Fix by
calling such functions in the transient "eval_econtext", and adjust other
logic to match. Back-patch to all supported versions.
Andres Freund, Jan Urbański, Tom Lane
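The pattern is the usual one for calling I/O functions from long-lived code:
do the call in a context that is reset soon afterwards. A sketch, with an
ExprContext's per-tuple memory standing in for plpgsql's eval_econtext:

    #include "postgres.h"
    #include "fmgr.h"
    #include "nodes/execnodes.h"
    #include "utils/lsyscache.h"

    /*
     * Convert a value to text via its type output function without leaking
     * into the surrounding long-lived context.  The result lives in
     * per-tuple memory, so use it before the ExprContext is next reset.
     */
    static char *
    convert_value_transiently(ExprContext *econtext, Oid typoid, Datum value)
    {
        Oid         typoutput;
        bool        typisvarlena;
        FmgrInfo    flinfo;
        MemoryContext oldcontext;
        char       *result;

        getTypeOutputInfo(typoid, &typoutput, &typisvarlena);
        fmgr_info(typoutput, &flinfo);

        /* anything the output function leaks goes into per-tuple memory ... */
        oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
        result = OutputFunctionCall(&flinfo, value);
        MemoryContextSwitchTo(oldcontext);

        /* ... and is recovered at the next ResetExprContext(econtext) */
        return result;
    }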
Don't quote the output of format_procedure(); it's already quoted quite
enough. Remove the fn_name field, which was now just dead weight. Fix
remaining expected-output files.
This got broken by commit 4c6cedd1b014abf2046886a9a92e10e18f0d658e,
which caused PL/pgsql error messages to print the function
signature, not just the name.
Per buildfarm.
Add result object functions .colnames, .coltypes, .coltypmods to
obtain information about the result column names and types, which was
previously not possible in the PL/Python SPI interface.
reviewed by Abhijit Menon-Sen
Certain things like typeglobs or readonly things like $^V cause
perl's SvPVutf8() to die nastily and crash the backend. To avoid
that bug we make a copy of the object, which will subsequently be
garbage collected.
Back patched to 9.1 where we first started using SvPVutf8().
Per -hackers discussion. Original problem reported by David Wheeler.
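Schematically, the idea is as follows (a sketch using the Perl C API; the
actual plperl helper differs in detail):

    #include "EXTERN.h"
    #include "perl.h"

    /*
     * Fetch a UTF-8 string from an SV without letting SvPVutf8() croak on a
     * typeglob or readonly SV: work on a mortal copy, which perl frees for
     * us at the end of the current scope.
     */
    static char *
    sv_to_utf8_string(SV *sv, STRLEN *len)
    {
        sv = sv_mortalcopy(sv);
        return SvPVutf8(sv, *len);
    }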
This moves the code around from one huge file into hopefully logical
and more manageable modules. For the most part, the code itself was
not touched, except: PLy_function_handler and PLy_trigger_handler were
renamed to PLy_exec_function and PLy_exec_trigger, because they were
not actually handlers in the PL handler sense, and it makes the naming
more similar to the way PL/pgSQL is organized. The initialization of
the procedure caches was separated into a new function
init_procedure_caches to keep the hash tables private to
plpy_procedures.c.
Jan Urbański and Peter Eisentraut
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
Jan Urbański, reviewed by Steve Singer
In the previous coding, callers were faced with an awkward choice:
look up the name, do permissions checks, and then lock the table; or
look up the name, lock the table, and then do permissions checks.
The first choice was wrong because the results of the name lookup
and permissions checks might be out-of-date by the time the table
lock was acquired, while the second allowed a user with no privileges
to interfere with access to a table by users who do have privileges
(e.g. if a malicious backend queues up for an AccessExclusiveLock on
a table on which AccessShareLock is already held, further attempts
to access the table will be blocked until the AccessExclusiveLock
is obtained and the malicious backend's transaction rolls back).
To fix, allow callers of RangeVarGetRelid() to pass a callback which
gets executed after performing the name lookup but before acquiring
the relation lock. If the name lookup is retried (because
invalidation messages are received), the callback will be re-executed
as well, so we get the best of both worlds. RangeVarGetRelid() is
renamed to RangeVarGetRelidExtended(); callers not wishing to supply
a callback can continue to invoke it as RangeVarGetRelid(), which is
now a macro. Since the only caller that uses nowait = true now
passes a callback anyway, the RangeVarGetRelid() macro defaults nowait
as well. The callback can also be used for supplemental locking - for
example, REINDEX INDEX needs to acquire the table lock before the index
lock to reduce deadlock possibilities.
There's a lot more work to be done here to fix all the cases where this
can be a problem, but this commit provides the general infrastructure
and fixes the following specific cases: REINDEX INDEX, REINDEX TABLE,
LOCK TABLE, and DROP TABLE/INDEX/SEQUENCE/VIEW/FOREIGN TABLE.
Per discussion with Noah Misch and Alvaro Herrera.
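A caller using the new interface ends up looking roughly like this (a sketch
based on my reading of the 9.2-era namespace.h; check the header for the
exact callback typedef):

    #include "postgres.h"
    #include "catalog/namespace.h"
    #include "miscadmin.h"
    #include "storage/lock.h"
    #include "utils/acl.h"

    /*
     * Callback run after the name lookup but before the lock is taken; it is
     * re-run if invalidation forces the lookup to be retried.
     */
    static void
    check_ownership_callback(const RangeVar *relation, Oid relId,
                             Oid oldRelId, void *arg)
    {
        /* relId is InvalidOid if the name did not resolve */
        if (!OidIsValid(relId))
            return;

        if (!pg_class_ownercheck(relId, GetUserId()))
            aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS,
                           relation->relname);
    }

    static Oid
    lock_table_checked(const RangeVar *relation)
    {
        return RangeVarGetRelidExtended(relation, AccessExclusiveLock,
                                        false,  /* missing_ok */
                                        false,  /* nowait */
                                        check_ownership_callback, NULL);
    }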
The old expression sed 's,$(srcdir),python3,' would normally resolve
as sed 's,.,python3,', which is not really what we wanted. While it
doesn't actually break anything right now, it's still wrong, so put in
a bit more work to make it more robust.
The original coding was
    var->value = (Datum) state;
which is bogus, and then in commit 2f0f7b4bce13e68394543728801ef011fd82fac6
it was "corrected" to
    var->value = PointerGetDatum(state);
which is a faithful translation but still wrong.
This seems purely cosmetic, though, so no need for a back-patch.
Pavel Stehule
distro version of perl.
David Wheeler and Alex Hunsaker.
Backpatch to 9.1 where it applies cleanly. A simple workaround is available for earlier
branches, and further effort doesn't seem warranted.
exception handler. This was a regression in 9.1, when the capability
to catch specific SPI errors was added, so backpatch to 9.1.
Mika Eloranta, with some editing by Jan Urbański.
This seems to have been just an oversight in previous foreign-table work.
A quick grep didn't turn up any other places where RELKIND_FOREIGN_TABLE
was obviously omitted.
One change noted by Alexander Soudakov, the other by me.
Back-patch to 9.1.
The original implementation of ELSIF in plpgsql converted the construct
into nested simple IF statements. This was prone to stack overflow with
long ELSIF lists, in two different ways. First, it's difficult to generate
the parsetree without using right-recursion in the bison grammar, and
that's prone to parser stack overflow since nothing can be reduced until
the whole list has been read. Second, we'd recurse during execution, thus
creating an unnecessary risk of execution-time stack overflow. Rewrite
so that the ELSIF list is represented as a flat list, scanned via iteration
not recursion, and generated through left-recursion in the grammar.
Per a gripe from Håvard Kongsgård.
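Execution over the flat list is then a plain loop; schematically (the struct
and helper names below are illustrative stand-ins, not the exact plpgsql
ones):

    #include "postgres.h"
    #include "nodes/pg_list.h"

    /* illustrative stand-in for one ELSIF arm */
    typedef struct ElsifArm
    {
        void       *cond;       /* condition expression */
        List       *stmts;      /* statements to run if it is true */
    } ElsifArm;

    /* assumed helpers, standing in for plpgsql's expression/statement executors */
    extern bool eval_cond(void *expr);
    extern int  exec_stmts(List *stmts);

    static int
    exec_if(void *if_cond, List *if_stmts, List *elsif_list, List *else_stmts)
    {
        ListCell   *lc;

        if (eval_cond(if_cond))
            return exec_stmts(if_stmts);

        /* flat list, scanned iteratively: no execution-time recursion */
        foreach(lc, elsif_list)
        {
            ElsifArm   *arm = (ElsifArm *) lfirst(lc);

            if (eval_cond(arm->cond))
                return exec_stmts(arm->stmts);
        }

        return exec_stmts(else_stmts);
    }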
This patch restores the pre-9.1 behavior that pl/perl functions returning
VOID ignore the result value of their last Perl statement. 9.1.0
unintentionally threw an error if the last statement returned a reference,
as reported by Amit Khandekar.
Also, make sure it works to return a string value for a composite type,
so long as the string meets the type's input format. We already allowed
the equivalent behavior for arrays, so it seems inconsistent to not allow
it for composites.
In addition, ensure we throw errors for attempts to return arrays or hashes
when the function's declared result type is not an array or composite type,
respectively. Pre-9.1 versions rather uselessly returned strings like
ARRAY(0x221a9a0) or HASH(0x221aa90), while 9.1.0 threw an error for the
hash case and returned a garbage value for the array case.
Also, clean up assorted grotty coding in Perl array conversion, including
use of a session-lifespan memory context to accumulate the array value
(resulting in session-lifespan memory leak on error), failure to apply the
declared typmod if any, and failure to detect some cases of non-rectangular
multi-dimensional arrays.
Alex Hunsaker and Tom Lane
We used to just remember the GucSource, but saving GucContext too provides
a little more information --- notably, whether a SET was done by a
superuser or regular user. This allows us to rip out the fairly dodgy code
that define_custom_variable used to use to try to infer the context to
re-install a pre-existing setting with. In particular, it now works for
a superuser to SET an extension's SUSET custom variable before loading the
associated extension, because GUC can remember whether the SET was done as
a superuser or not. The plperl regression tests contain an example where
this is useful.
This variable provides only marginal error-prevention capability (since
it can only check the prefix of a qualified GUC name), and the consensus
is that that isn't worth the amount of hassle that maintaining the setting
creates for DBAs. So, let's just remove it.
With this commit, the system will silently accept a value for any qualified
GUC name at all, whether it has anything to do with any known extension or
not. (Unqualified names still have to match known built-in settings,
though; and you will get a WARNING at extension load time if there's an
unrecognized setting with that extension's prefix.)
There's still some discussion ongoing about whether to tighten that up and
if so how; but if we do come up with a solution, it's not likely to look
anything like custom_variable_classes.
plpgsql's exec_stmt_execsql was Assert'ing that a CachedPlanSource was
is_valid immediately after exec_prepare_plan. The risk factor in this case
is that after building the prepared statement, exec_prepare_plan calls
exec_simple_check_plan, which might try to generate a generic plan --- and
with CLOBBER_CACHE_ALWAYS or other unusual causes of invalidation, that
could result in an invalidation. However, that path could only be taken
for a SELECT query, for which we need not set mod_stmt. So in this case
I think it's best to just remove the Assert; it's okay to look at a
slightly-stale querytree for what we need here. Per buildfarm testing.
Now that a NULL ParamListInfo pointer causes significantly different
behavior in plancache.c, be sure to pass it that way when the expression
is known not to reference any plpgsql variables. Saves a few setup
cycles anyway.
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach. The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.
In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed. The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps. It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan. The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced. Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
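For C code using SPI, the change in usage is roughly this (sketch; error
checking omitted):

    #include "postgres.h"
    #include "executor/spi.h"

    /*
     * Old pattern: SPI_saveplan() copies the plan out of the SPI procedure
     * context, after which the original is usually freed.
     */
    static SPIPlanPtr
    prepare_plan_old(const char *query)
    {
        SPIPlanPtr  plan = SPI_prepare(query, 0, NULL);
        SPIPlanPtr  saved = SPI_saveplan(plan);    /* long-lived copy */

        SPI_freeplan(plan);                        /* discard the original */
        return saved;
    }

    /*
     * New pattern: SPI_keepplan() reparents the existing plan so it survives
     * SPI_finish(); no copy is made.
     */
    static SPIPlanPtr
    prepare_plan_new(const char *query)
    {
        SPIPlanPtr  plan = SPI_prepare(query, 0, NULL);

        SPI_keepplan(plan);
        return plan;
    }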
The $(PERL) macro will be set by configure if it finds perl at all,
but $(perl_privlibexp) isn't configured unless you said --with-perl.
This results in confusing error messages if someone cd's into
src/pl/plperl and tries to build there despite the configure omission,
as reported by Tomas Vondra in bug #6198. Add simple checks to
provide a more useful report, while not disabling other use of the
makefile such as "make clean".
Back-patch to 9.0, which is as far as the patch applies easily.
walsender.h should depend on xlog.h, not vice versa. (Actually, the
inclusion was circular until a couple hours ago, which was even sillier;
but Bruce broke it in the expedient rather than logically correct
direction.) Because of that poor decision, plus blind application of
pgrminclude, we had a situation where half the system was depending on
xlog.h to include such unrelated stuff as array.h and guc.h. Clean up
the header inclusion, and manually revert a lot of what pgrminclude had
done so things build again.
This episode reinforces my feeling that pgrminclude should not be run
without adult supervision. Inclusion changes in header files in particular
need to be reviewed with great care. More generally, it'd be good if we
had a clearer notion of module layering to dictate which headers can sanely
include which others ... but that's a big task for another day.
Module initialization functions in Python 3 must have external
linkage, because PyMODINIT_FUNC does dllexport on Windows-like
platforms. Without this change, the build with Python 3 fails on
Windows.
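For reference, a Python 3 module init function has this shape; the module and
function names below are illustrative. PyMODINIT_FUNC expands to a
dllexport'ed declaration on Windows-like platforms, so marking the function
static breaks the build there.

    #include <Python.h>

    static struct PyModuleDef example_module = {
        PyModuleDef_HEAD_INIT,
        "example",              /* module name */
        NULL,                   /* docstring */
        -1,                     /* per-interpreter state size */
        NULL                    /* methods */
    };

    /* must have external linkage: no "static" here */
    PyMODINIT_FUNC
    PyInit_example(void)
    {
        return PyModule_Create(&example_module);
    }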