longer works -- IncrHeapAccessStat() didn't actually *do* anything
anymore, so no reason to keep it around AFAICS. I also fixed a
grammatical error in a comment.
Neil Conway
takes two parameters, an OID x and an integer y, and returns "true" with
probability 1/y (the OID argument is ignored). This can be useful -- for
example, it can be used to select a random sampling of the rows in a
table (which is what the "random" regression test uses it for).
This patch removes that function, because it was old and messy. The old
function had the following problems:
- it was undocumented
- it was poorly named
- it was designed to workaround an optimizer bug that no longer exists
(the OID argument is to ensure that the optimizer won't optimize away
calls to the function; AFAIK marking the function as 'volatile' suffices
nowadays)
- it used a different random-number generation technique than the other
PRNG-related functions in the backend do (it called random() like they
do, but it had its own logic for setting a seed and deciding when to
reseed the RNG).
Ok, this patch removes oidrand(), oidsrand(), and userfntest(), and
improves the SGML docs a little bit (un-commenting the setseed()
documentation).
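For reference, the same sort of sampling can be done with the documented
functions alone; a minimal sketch (some_table is just a placeholder name):
SELECT setseed(0.5);                            -- optional: make the sample repeatable
SELECT * FROM some_table WHERE random() < 0.1;  -- roughly 1 row in 10; random() is
                                                -- volatile, so it isn't folded away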
Neil Conway
On Wed, 2003-01-08 at 21:59, Christopher Kings-Lynne wrote:
> I agree. I want to remove OIDs from heaps of our tables when we go to 7.3.
> I'd rather not have to do it in the dump due to down time.
Rod Taylor <rbt@rbt.ca>
> =================================================================
> User interface proposal for multi-row function targetlist entries
> =================================================================
> 1. Only one targetlist entry may return a set.
> 2. Each targetlist item (other than the set returning one) is
> repeated for each item in the returned set.
>
Having gotten no objections (actually, no response at all), I can only assume
no one had heartburn with this change. The attached patch covers the first of
the two proposals, i.e. restricting the target list to only one set returning
function.
It compiles cleanly, and passes all regression tests. If there are no
objections, please apply.
Any suggestions on where this should be documented (other than maybe sql-select)?
Thanks,
Joe
p.s. Here's what the previous example now looks like:
CREATE TABLE bar(f1 int, f2 text, f3 int);
INSERT INTO bar VALUES(1, 'Hello', 42);
INSERT INTO bar VALUES(2, 'Happy', 45);
CREATE TABLE foo(a int, b text);
INSERT INTO foo VALUES(42, 'World');
INSERT INTO foo VALUES(42, 'Everyone');
INSERT INTO foo VALUES(45, 'Birthday');
INSERT INTO foo VALUES(45, 'New Year');
CREATE TABLE foo2(a int, b text);
INSERT INTO foo2 VALUES(42, '!!!!');
INSERT INTO foo2 VALUES(42, '????');
INSERT INTO foo2 VALUES(42, '####');
INSERT INTO foo2 VALUES(45, '$$$$');
CREATE OR REPLACE FUNCTION getfoo(int) RETURNS SETOF text AS '
SELECT b FROM foo WHERE a = $1
' language 'sql';
CREATE OR REPLACE FUNCTION getfoo2(int) RETURNS SETOF text AS '
SELECT b FROM foo2 WHERE a = $1
' language 'sql';
regression=# SELECT f1, f2, getfoo(f3) AS f4 FROM bar;
f1 | f2 | f4
----+-------+----------
1 | Hello | World
1 | Hello | Everyone
2 | Happy | Birthday
2 | Happy | New Year
(4 rows)
regression=# SELECT f1, f2, getfoo(f3) AS f4, getfoo2(f3) AS f5 FROM bar;
ERROR: Only one target list entry may return a set result
Joe Conway
codes, per discussion from last March. parse.h should now be included
*only* by gram.y, scan.l, keywords.c, parser.c. This prevents surprising
misbehavior after seemingly-trivial grammar adjustments.
rid of the assumption that sizeof(Oid)==sizeof(int). This is one small
step towards someday supporting 8-byte OIDs. For the moment, it doesn't
do much except get rid of a lot of unsightly casts.
locParam lists can be converted to bitmapsets to speed updating. Also,
replace 'locParam' with 'allParam', which contains all the paramIDs
relevant to the node (i.e., the union of extParam and locParam); this
saves a step during SetChangedParamList() without costing anything
elsewhere.
Instead of grovelling through pg_class to find them, make use of the
handy dandy dependency mechanism: just delete everything that depends
on our temp schema. Unlike the pg_class scan, the dependency mechanism
is smart enough to delete things in an order that doesn't fall foul of
any dependency restrictions. Fixes problem reported by David Heggie:
a temp table with a serial column may cause a backend FATAL exit at
shutdown time, if it chances to try to delete the temp sequence first.
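A sketch of the reported failure scenario (names are hypothetical):
CREATE TEMP TABLE tmp_demo (id serial, val text);
-- at backend exit, cleanup formerly scanned pg_class and could try to drop the
-- implicit sequence tmp_demo_id_seq before the table that depends on it, leading
-- to the FATAL exit; deleting whatever depends on the temp schema avoids this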
expression accepted by the regex operators, per discussion yesterday.
Along the way, reduce deadlock_timeout from PGC_POSTMASTER to PGC_SIGHUP
category. It is probably best to insist that all backends share the same
setting, but that doesn't mean it has to be frozen at startup.
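For example, with the looser category the timeout can be changed in
postgresql.conf and picked up when the postmaster is sent SIGHUP (e.g.
pg_ctl reload) rather than only at postmaster start; the value below is
purely illustrative:
-- postgresql.conf:  deadlock_timeout = 2000
-- after the reload, all backends see the new value
SHOW deadlock_timeout;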
(extracted from Tcl 8.4.1 release, as Henry still hasn't got round to
making it a separate library). This solves a performance problem for
multibyte, as well as upgrading our regexp support to match recent Tcl
and nearly match recent Perl.
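A quick sketch of the sort of advanced-RE syntax this brings in (assumes the
usual backslash doubling inside string literals):
SELECT 'abc123' ~ '\\d{3}';              -- Perl-style class escapes: true
SELECT substring('aaaa' from 'a+?');     -- non-greedy quantifiers: returns 'a'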
startup, not in the parser; this allows ALTER DOMAIN to work correctly
with domain constraint operations stored in rules. Rod Taylor;
code review by Tom Lane.
nodes where it's not really necessary. In many cases where the scan node
is not the topmost plan node (eg, joins, aggregation), it's possible to
just return the table tuple directly instead of generating an intermediate
projection tuple. In preliminary testing, this reduced the CPU time
needed for 'SELECT COUNT(*) FROM foo' by about 10%.
value of MAX_TIME_PRECISION in floating-point-timestamp-storage case
from 13 to 10, which is as much as time_out is actually willing to print.
(The alternative of increasing the number of digits we are willing to
print looks risky; we might find ourselves printing roundoff garbage.)
passed to join selectivity estimators. Make use of this in eqjoinsel
to derive non-bogus selectivity for IN clauses. Further tweaking of
cost estimation for IN.
initdb forced because of pg_proc.h changes.
Try to model the effect of rescanning input tuples in mergejoins;
account for JOIN_IN short-circuiting where appropriate. Also, recognize
that mergejoin and hashjoin clauses may now be more than single operator
calls, so we have to charge appropriate execution costs.
necessarily following the JOIN syntax to develop the query plan. The old
behavior is still available by setting GUC variable JOIN_COLLAPSE_LIMIT
to 1. Also create a GUC variable FROM_COLLAPSE_LIMIT to control the
similar decision about when to collapse sub-SELECT lists into their parent
lists. (This behavior existed already, but the limit was always
GEQO_THRESHOLD/2; now it's separately adjustable.)
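For example (the non-default number below is purely illustrative):
SET JOIN_COLLAPSE_LIMIT = 1;   -- honor explicit JOIN order, as the planner always did before
SET FROM_COLLAPSE_LIMIT = 8;   -- separately limit folding of sub-SELECT lists into the parent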
of the socket file and socket lock file; this should prevent both of them
from being removed by even the stupidest varieties of /tmp-cleaning
script. Per suggestion from Giles Lean.
of known-equal expressions includes any constant expressions (including
Params from outer queries), we actively suppress any 'var = var'
clauses that are or could be deduced from the set, generating only the
deducible 'var = const' clauses instead. The idea here is to push down
the restrictions implied by the equality set to base relations whenever
possible. Once we have applied the 'var = const' clauses, the 'var = var'
clauses are redundant, and should be suppressed both to save work at
execution and to avoid double-counting restrictivity.
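To illustrate with hypothetical tables t1 and t2:
SELECT * FROM t1, t2 WHERE t1.x = t2.y AND t2.y = 10;
-- the planner now applies t1.x = 10 and t2.y = 10 as restrictions on the base
-- relations and suppresses the then-redundant join clause t1.x = t2.y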
that's selecting into a RECORD variable returns zero rows, make it
assign an all-nulls row to the RECORD; this is consistent with what
happens when the SELECT INTO target is not a RECORD. In support of
this, tweak the SPI code so that a valid tuple descriptor is returned
even when a SPI select returns no rows.
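A minimal plpgsql sketch of the new behavior, reusing the foo table from the
example above (the function name is made up):
CREATE OR REPLACE FUNCTION getrec_demo() RETURNS text AS '
DECLARE
    r RECORD;
BEGIN
    SELECT b INTO r FROM foo WHERE a = -1;  -- matches no rows
    RETURN r.b;  -- r is now an all-nulls row, so this returns NULL instead of erroring
END;
' LANGUAGE 'plpgsql';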
There are two implementation techniques: the executor understands a new
JOIN_IN jointype, which emits at most one matching row per left-hand row,
or the result of the IN's sub-select can be fed through a DISTINCT filter
and then joined as an ordinary relation.
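For instance, a query like this one (using the foo and foo2 tables from the
earlier example) can now be planned either way:
SELECT * FROM foo WHERE a IN (SELECT a FROM foo2);
-- either join foo against the sub-select with the new JOIN_IN join type (at most
-- one match emitted per foo row), or run the sub-select through DISTINCT and
-- join the result as an ordinary relation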
Along the way, some minor code cleanup in the optimizer; notably, break
out most of the jointree-rearrangement preprocessing in planner.c and
put it in a new file prep/prepjointree.c.
that used to do it in planner. That was an ancient kluge that was
never satisfactory; errors should be detected at parse time when possible.
But at the time we didn't have the support mechanism (expression_tree_walker
et al) to make it convenient to do in the parser.
simplify callers. It turns out the common case is that the caller
does want to recurse into sub-queries, so push support for that into
these subroutines.
datetime token tables. Even more embarrassing, the regression tests
revealed some of the problems --- but evidently the bogus output wasn't
questioned. Add code to postmaster startup to directly check the tables
for correct ordering, in hopes of not being embarrassed like this again.
join_references(), it's practical to consolidate all join_references()
processing into the set_plan_references traversal in setrefs.c. This
seems considerably cleaner than the old way where we did it for join
quals in createplan.c and for targetlists in setrefs.c.
containing a volatile function), rather than only on 'Var = Var' clauses
as before. This makes it practical to do flatten_join_alias_vars at the
start of planning, which in turn eliminates a bunch of klugery inside the
planner to deal with alias vars. As a free side effect, we now detect
implied equality of non-Var expressions; for example in
SELECT ... WHERE a.x = b.y and b.y = 42
we will deduce a.x = 42 and use that as a restriction qual on a. Also,
we can remove the restriction introduced 12/5/02 to prevent pullup of
subqueries whose targetlists contain sublinks.
Still TODO: make statistical estimation routines in selfuncs.c and costsize.c
smarter about expressions that are more complex than plain Vars. The need
for this is considerably greater now that we have to be able to estimate
the suitability of merge and hash join techniques on such expressions.
a qualification clause (and hence can get away with being sloppy about
distinguishing FALSE from UNKNOWN). We need to know this in subselect.c;
marking the subplans in setrefs.c is too late.
costs for expression evaluation, not only per-tuple cost as before.
This extension is needed in order to deal realistically with hashed or
materialized sub-selects.
Simplify SubLink by storing just a List of operator OIDs, instead of
a list of incomplete OpExprs --- that was a bizarre and bulky choice,
with no redeeming social value since we have to build new OpExprs
anyway when forming the plan tree.
'NOT (x IN (subselect))', that is 'NOT (x = ANY (subselect))',
rather than 'x <> ALL (subselect)' as we formerly did. This
opens the door to optimizing NOT IN the same way as IN, whereas
there's no hope of optimizing the expression using <>. Also,
convert 'x <> ALL (subselect)' to the NOT(IN) style, so that
the optimization will be available when processing rules dumped
by older Postgres versions.
initdb forced due to small change in SubLink node representation.
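To illustrate with the foo and foo2 tables from the earlier example, all of the
following spellings now end up in the same NOT(a = ANY(...)) form internally,
so all of them become candidates for the IN-style optimization:
SELECT * FROM foo WHERE a NOT IN (SELECT a FROM foo2);
SELECT * FROM foo WHERE NOT (a = ANY (SELECT a FROM foo2));
SELECT * FROM foo WHERE a <> ALL (SELECT a FROM foo2);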
causes interval rounding not to work as expected in 7.3, for example
SELECT '18:17:15.6'::interval(0) does not round the value.
I did not force initdb, but one is needed to install the added row.
the index AM when we know we are fetching a unique row. However, this
logic did not consider the possibility that it would be asked to fetch
backwards. Also fix mark/restore to work correctly in this scenario.
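A hypothetical way to exercise the backward-fetch case (table and cursor names
are made up):
CREATE TABLE uniq_demo (k int PRIMARY KEY, v text);
INSERT INTO uniq_demo VALUES (1, 'one');
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM uniq_demo WHERE k = 1;
FETCH FORWARD 1 FROM c;
FETCH BACKWARD 1 FROM c;  -- formerly could misbehave when the unique-row fetch assumed forward-only scanning
END;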