tests to return the correct results per SQL9x when given NULL inputs.
Reimplement these tests as well as IS [NOT] NULL to have their own
expression node types, instead of depending on special functions.
From Joe Conway, with a little help from Tom Lane.
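For illustration only (not from the original message), the corrected semantics give definite answers even for NULL inputs:

    SELECT NULL IS NULL;               -- true
    SELECT NULL IS NOT NULL;           -- false
    SELECT (NULL::bool) IS TRUE;       -- false, not NULL
    SELECT (NULL::bool) IS NOT FALSE;  -- true
    SELECT (NULL::bool) IS UNKNOWN;    -- true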
for GRANT/REVOKE is now just that, not "CHANGE".
On the way, move some of the aclitem internal representation out of
the parser and build a real parse tree instead. Also add some 'const'
qualifiers.
of costsize.c routines to pass Query root, so that costsize can figure
more things out by itself and not be so dependent on its callers to tell
it everything it needs to know. Use selectivity of hash or merge clause
to estimate number of tuples processed internally in these joins
(this is more useful than it would have been before, since eqjoinsel is
now somewhat more accurate).
report on old-style functions invoked by RI triggers. We had a number of
other places that were being sloppy about which memory context FmgrInfo
subsidiary data will be allocated in. Turns out none of them actually
cause a problem in 7.1, but only for arcane reasons such as the fact
that old-style triggers aren't supported anyway. To avoid getting burnt
later, I've restructured the trigger support so that we don't keep trigger
FmgrInfo structs in relcache memory. Some other related cleanups too:
it's not really necessary to call fmgr_info at all while setting up
the index support info in relcache entries, because those ScanKeyEntry
structs are never used to invoke the functions. This should speed up
relcache initialization a tiny bit.
the same tuple slot that the raw tuple came from, because that slot has
the wrong tuple descriptor. Store it into its own slot with the correct
descriptor, instead. This repairs problems with SPI functions seeing
inappropriate tuple descriptors --- for example, plpgsql code failing to
cope with SELECT FOR UPDATE.
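A hypothetical plpgsql fragment of the sort that used to fail (table and column names invented):

    -- SELECT ... FOR UPDATE inside plpgsql previously handed back a tuple
    -- with an inappropriate descriptor; it now gets its own slot
    CREATE FUNCTION bump_qty(int4) RETURNS int4 AS '
    DECLARE
        r record;
    BEGIN
        SELECT * INTO r FROM orders WHERE id = $1 FOR UPDATE;
        UPDATE orders SET qty = r.qty + 1 WHERE id = $1;
        RETURN r.qty + 1;
    END;' LANGUAGE 'plpgsql';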
create_index_paths are not immediately discarded, but are available for
subsequent planner work. This allows avoiding redundant syscache lookups
in several places. Change interface to operator selectivity estimation
procedures to allow faster and more flexible estimation.
Initdb forced due to change of pg_proc entries for selectivity functions!
trees (mostly my fault). Repair. Also fix long-standing bug in ExecReplace:
after recomputing a concurrently updated tuple, we must recheck constraints.
Make EvalPlanQual leak memory with somewhat less enthusiasm than before,
although plugging leaks fully will require more changes than I care to risk
in a dot-release.
a separate statement (though it can still be invoked as part of VACUUM, too).
pg_statistic redesigned to be more flexible about what statistics are
stored. ANALYZE now collects a list of several of the most common values,
not just one, plus a histogram (not just the min and max values). Random
sampling is used to make the process reasonably fast even on very large
tables. The number of values and histogram bins collected is now
user-settable via an ALTER TABLE command.
There is more still to do; the new stats are not being used everywhere
they could be in the planner. But the remaining changes for this project
should be localized, and the behavior is already better than before.
A not-very-related change is that sorting now makes use of a btree comparison
routine if it can find one, rather than invoking '<' twice.
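The user-visible side looks roughly like this (table and column names are made up):

    -- ANALYZE is now a statement of its own, though VACUUM ANALYZE still works
    ANALYZE mytable;
    VACUUM ANALYZE mytable;

    -- per-column statistics target: number of most-common values and
    -- histogram bins to collect
    ALTER TABLE mytable ALTER COLUMN acolumn SET STATISTICS 100;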
clause with an alias is a <subquery> and therefore hides table references
appearing within it, according to the spec. This is the same as the
preliminary patch I posted to pgsql-patches yesterday, plus some really
grotty code in ruleutils.c to reverse-list a query tree with the correct
alias name depending on context. I'd rather not have done that, but unless
we want to force another initdb for 7.1, there's no other way for now.
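For example (hypothetical table), the alias now hides the underlying table name:

    SELECT f.x FROM foo AS f;     -- fine
    SELECT foo.x FROM foo AS f;   -- now an error: "foo" is hidden by the alias "f"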
allocated by plan nodes are not leaked at end of query. This doesn't
really matter for normal queries, but it sure does for queries invoked
repetitively inside SQL functions. Clean up some other grotty code
associated with tupdescs, and fix a few other memory leaks exposed by
tests with simple SQL functions.
and burn. Just for added luck, change reading of CONST nodes so that
we do not need to consult pg_type rows while reading them; this means
that no database access occurs during stringToNode. This requires
changing the order in which const-node fields are written, which means
an initdb is forced.
and revert documentation to describe the existing INHERITS clause
instead, per recent discussion in pghackers. Also fix implementation
of SQL_inheritance SET variable: it is not cool to look at this var
during the initial parsing phase, only during parse_analyze(). See
recent bug report concerning misinterpretation of date constants just
after a SET TIMEZONE command. gram.y really has to be an invariant
transformation of the query string to a raw parsetree; anything that
can vary with time must be done during parse analysis.
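For reference, a sketch of the retained syntax and the SET variable (the usual inheritance example tables):

    CREATE TABLE cities (name text, population float8);
    CREATE TABLE capitals (state char(2)) INHERITS (cities);

    -- SQL_inheritance is consulted during parse analysis, not in gram.y
    SET SQL_inheritance TO off;
    SELECT * FROM cities;     -- only rows of cities itself
    SELECT * FROM cities*;    -- cities plus its children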
comparison does not consider paths different when they differ only in
uninteresting aspects of sort order. (We had a special case of this
consideration for indexscans already, but generalize it to apply to
ordered join paths too.) Be stricter about what is a canonical pathkey
to allow faster pathkey comparison. Cache canonical pathkeys and
dispersion stats for left and right sides of a RestrictInfo's clause,
to avoid repeated computation. Total speedup will depend on number of
tables in a query, but I see about 4x speedup of planning phase for
a sample seven-table query.
avoid repeated evaluations in cost_qual_eval(). This turns out to save
a useful fraction of planning time. No change to external representation
of RestrictInfo --- although that node type doesn't appear in stored
rules anyway.
not-very-good handling of mid-size allocation requests. Do everything via
either the "small" case (chunk size rounded up to power of 2) or the "large"
case (pass it straight off to malloc()). Increase the number of freelists
a little to set the breakpoint between these behaviors at 8K.
maintained for each cache entry. A cache entry will not be freed until
the matching ReleaseSysCache call has been executed. This eliminates
worries about cache entries getting dropped while still in use. See
my posting to pg-hackers of even date for more info.
cloned, rather than always cloning template1. Modify initdb to generate
two identical databases rather than one, template0 and template1.
Connections to template0 are disallowed, so that it will always remain
in its virgin as-initdb'd state. pg_dumpall now dumps databases with
restore commands that say CREATE DATABASE foo WITH TEMPLATE = template0.
This allows proper behavior when there is user-added data in template1.
initdb forced!
joins, and clean things up a good deal at the same time. Append plan node
no longer hacks on rangetable at runtime --- instead, all child tables are
given their own RT entries during planning. Concept of multiple target
tables pushed up into execMain, replacing bug-prone implementation within
nodeAppend. Planner now supports generating Append plans for inheritance
sets either at the top of the plan (the old way) or at the bottom. Expanding
at the bottom is appropriate for tables used as sources, since they may
appear inside an outer join; but we must still expand at the top when the
target of an UPDATE or DELETE is an inheritance set, because we actually need
a different targetlist and junkfilter for each target table in that case.
Fortunately a target table can't be inside an outer join... Bizarre mutual
recursion between union_planner and prepunion.c is gone --- in fact,
union_planner doesn't really have much to do with union queries anymore,
so I renamed it grouping_planner.
position() and substring() functions, so that they work transparently for
bit types as well. Alias the text functions appropriately.
Add position() for bit types.
Add new constant node T_BitString that represents literals of the form
B'1001' and pass those to the zpbit type.
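Rough examples of the resulting behavior (values worked out by hand, not taken from the commit):

    SELECT B'1001';                              -- bit-string literal (T_BitString -> zpbit)
    SELECT position(B'10' IN B'001011');         -- 3
    SELECT substring(B'00101100' FROM 3 FOR 4);  -- 1011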
ExecutorRun. This allows LIMIT to work in a view. Also, LIMIT in a
cursor declaration will behave in a reasonable fashion, whereas before
it was overridden by the FETCH count.
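Illustrative usage (object names invented):

    -- LIMIT survives inside a view definition
    CREATE VIEW top_scores AS
        SELECT * FROM scores ORDER BY score DESC LIMIT 10;

    -- and in a cursor, where it is no longer overridden by the FETCH count
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM scores LIMIT 5;
    FETCH 100 FROM c;    -- returns at most 5 rows
    COMMIT;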
as well allow DROP multiple INDEX, RULE, TYPE as well. Add missing
CommandCounterIncrement to DROP loop, which could cause trouble otherwise
with multiple DROP of items affecting same catalog entries. Try to
bring a little consistency to various error messages using 'does not exist',
'nonexistent', etc --- I standardized on 'does not exist' since that's
what the vast majority of the existing uses seem to be.
This patch forces the use of 'DROP VIEW' to destroy views.
It also changes the syntax of DROP VIEW to
DROP VIEW v1, v2, ...
to match the syntax of DROP TABLE.
Some error messages were changed so this patch also includes changes to the
appropriate expected/*.out files.
Doc changes for 'DROP TABLE' and 'DROP VIEW' are included.
--
Mark Hollomon
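A quick sketch of the extended syntax (object names made up):

    DROP VIEW v1, v2;
    DROP INDEX idx_a, idx_b;
    DROP TYPE mytype1, mytype2;
    DROP RULE rule_a, rule_b;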
took some rejiggering of typename and ACL parsing, as well as moving
parse_analyze call out of parser(). Restructure postgres.c processing
so that parse analysis and rewrite are skipped when in abort-transaction
state. Only COMMIT and ABORT statements will be processed beyond the raw
parser() phase. This addresses problem of parser failing with database access
errors while in aborted state (see pghackers discussions around 7/28/00).
Also fix some bugs with COMMIT/ABORT statements appearing in the middle of
a single query input string.
Function, operator, and aggregate arguments/results can now use full
TypeName production, in particular foo[] for array types.
DROP OPERATOR and COMMENT ON OPERATOR were broken for unary operators.
Allow CREATE AGGREGATE to accept unquoted numeric constants for initcond.
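Sketches of what now parses (all names invented):

    -- foo[] array notation in a function signature
    CREATE FUNCTION first_elem(int4[]) RETURNS int4
        AS 'SELECT $1[1]' LANGUAGE 'sql';

    -- unquoted numeric initcond
    CREATE AGGREGATE my_sum (
        basetype = int4,
        sfunc    = int4pl,
        stype    = int4,
        initcond = 0
    );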
SQL92 semantics, including support for the ALL option. All three can be used
in subqueries and views. DISTINCT and ORDER BY work now in views, too.
This rewrite fixes many problems with cross-datatype UNIONs and INSERT/SELECT
where the SELECT yields different datatypes than the INSERT needs. I did
that by making UNION subqueries and SELECT in INSERT be treated like
subselects-in-FROM, thereby allowing an extra level of targetlist where the
datatype conversions can be inserted safely.
INITDB NEEDED!
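Hedged examples of constructs covered by the rewrite (tables and columns invented):

    -- set operations, including ALL, inside a view
    CREATE VIEW all_names AS
        SELECT name FROM employees
        UNION ALL
        SELECT name FROM contractors;

    -- cross-datatype UNION and INSERT/SELECT: the extra targetlist level
    -- gives a safe place for the int2 -> int4 coercion
    SELECT int4col FROM t1
    UNION
    SELECT int2col FROM t2;

    INSERT INTO t1 (int4col) SELECT int2col FROM t2;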
(Don't forget that an alias is required.) Views reimplemented as expanding
to subselect-in-FROM. Grouping, aggregates, DISTINCT in views actually
work now (he says optimistically). No UNION support in subselects/views
yet, but I have some ideas about that. Rule-related permissions checking
moved out of rewriter and into executor.
INITDB REQUIRED!
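A minimal example of the new construct (hypothetical tables):

    -- subselect in FROM; the alias ("sub") is required
    SELECT sub.dept, sub.total
    FROM (SELECT dept, sum(salary) AS total
          FROM employees
          GROUP BY dept) AS sub
    WHERE sub.total > 100000;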
for example, an SQL function can be used in a functional index. (I make
no promises about speed, but it'll work ;-).) Clean up and simplify
handling of functions returning sets.
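For instance (a made-up example), an SQL-language function can now back a functional index:

    CREATE FUNCTION full_name(text, text) RETURNS text
        AS 'SELECT $1 || '' '' || $2' LANGUAGE 'sql' WITH (iscachable);

    -- usable in a functional index (no speed promises)
    CREATE INDEX people_full_name_idx ON people (full_name(first_name, last_name));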
right circumstances a hash join executed as a DECLARE CURSOR/FETCH
query would crash the backend. Problem as seen in current sources was
that the hash tables were stored in a context that was a child of
TransactionCommandContext, which got zapped at completion of the FETCH
command --- but cursor cleanup executed at COMMIT expected the tables
to still be valid. I haven't chased down the details as seen in 7.0.*
but I'm sure it's the same general problem.
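The failing pattern was roughly this shape (schema invented):

    BEGIN;
    DECLARE c CURSOR FOR
        SELECT * FROM big_a, big_b WHERE big_a.id = big_b.id;  -- planned as a hash join
    FETCH 10 FROM c;
    -- cursor cleanup at COMMIT expected the hash tables to still be valid,
    -- but their memory context had already been freed
    COMMIT;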
right thing with variable-free clauses that contain noncachable functions,
such as 'WHERE random() < 0.5' --- these are evaluated once per
potential output tuple. Expressions that contain only Params are
now candidates to be indexscan quals --- for example, 'var = ($1 + 1)'
can now be indexed. Cope with RelabelType nodes atop potential indexscan
variables --- this oversight prevents 7.0.* from recognizing some
potentially indexscanable situations.
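Rough illustrations (hypothetical tables; the Param case arises for $n arguments inside SQL or PL functions):

    -- variable-free but noncachable: evaluated once per potential output tuple
    SELECT * FROM t WHERE random() < 0.5;

    -- a Params-only expression can now serve as an indexscan qual
    CREATE FUNCTION next_item(int4) RETURNS text
        AS 'SELECT name FROM items WHERE id = ($1 + 1)' LANGUAGE 'sql';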
from Param nodes, per discussion a few days ago on pghackers. Add new
expression node type FieldSelect that implements the functionality where
it's actually needed. Clean up some other unused fields in Func nodes
as well.
NOTE: initdb forced due to change in stored expression trees for rules.
thing when there are multiple result relations. Formerly, during
something like 'UPDATE foo*', foo's constraints and *only* foo's
constraints would be applied to all foo's children. Wrong-o ...
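A sketch of the situation being fixed (invented schema):

    CREATE TABLE foo (x int4 CHECK (x > 0));
    CREATE TABLE bar (y int4 CHECK (y < 100)) INHERITS (foo);

    -- formerly only foo's CHECK was applied to bar's rows here;
    -- each child is now checked against its own constraints
    UPDATE foo* SET x = x + 1;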
There's now only one transition value and transition function.
NULL handling in aggregates is a lot cleaner. Also, use Numeric
accumulators instead of integer accumulators for sum/avg on integer
datatypes --- this avoids overflow at the cost of being a little slower.
Implement VARIANCE() and STDDEV() aggregates in the standard backend.
Also, enable new LIKE selectivity estimators by default. Unrelated
change, but as long as I had to force initdb anyway...
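Quick user-level examples (made-up table):

    -- integer sum()/avg() now accumulate in numeric, avoiding int4 overflow
    SELECT sum(qty), avg(qty) FROM line_items;

    -- variance and standard deviation are standard aggregates now
    SELECT variance(price), stddev(price) FROM line_items;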