that the types of untyped string-literal constants are deduced (i.e.,
when coerce_type is applied to them, that's what the type must be).
Remove the ancient hack of storing the input Param-types array as a
global variable, and put the info into ParseState instead. This touches
a lot of files because of adjustments to routine parameter lists, but
it's really not a large patch. Note: the PREPARE statement still insists
on exact specification of parameter types, but that could easily be
relaxed now if we wanted to.
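For illustration, the exact-type requirement looks like this (statement
and table names are invented, not from the patch):

    PREPARE getrow (integer, text) AS
        SELECT * FROM mytab WHERE id = $1 AND name = $2;
    EXECUTE getrow(42, 'fred');

With the Param-type info in ParseState, the parser could in principle
deduce the $n types from context rather than requiring them up front.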
I had inadvertently omitted it while rearranging things to support
length-counted incoming messages. Also, change the parser's API back
to accepting a 'char *' query string instead of 'StringInfo', as the
latter wasn't buying us anything except overhead. (I think when I put
it in I had some notion of making the parser API 8-bit-clean, but
seeing that flex depends on null-terminated input, that's never really
going to happen.)
for tableID/columnID in RowDescription. (The latter isn't really
implemented yet though --- the backend always sends zeroes, and libpq
just throws away the data.)
initial values and runtime changes in selected parameters. This gets
rid of the need for an initial 'select pg_client_encoding()' query in
libpq, bringing us back to one message transmitted in each direction
for a standard connection startup. To allow server version to be sent
using the same GUC mechanism that handles other parameters, invent the
concept of a never-settable GUC parameter: you can 'show server_version'
but it's not settable by any GUC input source. Create 'lc_collate' and
'lc_ctype' never-settable parameters so that people can find out these
settings without needing pg_controldata. (These side ideas were all
discussed some time ago in pgsql-hackers, but not yet implemented.)
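A sketch of the resulting user-visible behavior (the error wording is
illustrative only):

    SHOW server_version;    -- readable, and reported at startup too
    SHOW lc_collate;        -- readable without running pg_controldata
    SET lc_collate = 'C';   -- fails: not settable by any GUC input source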
into a UNION that has some type coercions applied to the component
queries, so long as the qual itself does not reference any columns that
have such coercions. Per example from Jonathan Bartlett 24-Apr-03.
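As a hypothetical example (invented tables and columns), a qual like the
one below can now be pushed into both arms of the UNION, since x carries
no coercion even though y does:

    SELECT * FROM (SELECT a AS x, b::text AS y FROM t1
                   UNION
                   SELECT c AS x, d AS y FROM t2) ss
    WHERE x > 0;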
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- e.g., how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
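A quick sketch of the user-visible difference (hypothetical table name):

    COPY mytab TO STDOUT;          -- no terminating \. line anymore
    COPY BINARY mytab TO STDOUT;   -- binary COPY to the frontend now works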
with variable-width fields. No more truncation of long user names.
Also, libpq can now send its environment-variable-driven SET commands
as part of the startup packet, saving round trips to the server.
expressions, ARRAY(sub-SELECT) expressions, some array functions.
Polymorphic functions using ANYARRAY/ANYELEMENT argument and return
types. Some regression tests are in place; documentation is lacking.
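A minimal sketch of the new capabilities (function and table names are
invented for illustration):

    -- polymorphic: one definition serves any element type
    CREATE FUNCTION first_elem(anyarray) RETURNS anyelement
        AS 'SELECT $1[1]' LANGUAGE sql;

    SELECT first_elem(ARRAY[1,2,3]);            -- 1, as integer
    SELECT first_elem(ARRAY['a'::text, 'b']);   -- 'a', as text

    -- ARRAY(sub-SELECT) collects a subquery's output into an array
    SELECT ARRAY(SELECT relname FROM pg_class LIMIT 3);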
Joe Conway, with some kibitzing from Tom Lane.
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
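Sketch of the intended salvage procedure (table name hypothetical):

    SET zero_damaged_pages = on;
    SELECT * FROM mytab;   -- a damaged page now elicits a WARNING and is
                           -- zeroed, letting the scan reach the rest of
                           -- the table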
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers (see the sketch after this list).
- (unrelated) Cleaned up some SGML markup in sql.sgml and copy.sgml.
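As a quick illustration of the new keyword (cursor and table names are
invented):

    BEGIN;
    DECLARE c NO SCROLL CURSOR FOR SELECT * FROM mytab;
    FETCH FORWARD 10 FROM c;    -- fine
    -- backward fetches are disallowed for a NO SCROLL cursor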
Neil Conway
them as arrays of the internal datatype. This requires treating the
stavalues columns as 'anyarray' rather than 'text[]', which is not 100%
kosher but seems to work fine for the purposes we need for pg_statistic.
Perhaps in the future 'anyarray' will be allowed more generally.
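For instance (a sketch, not from the patch), the slot columns can now be
inspected directly and come back typed as arrays of the analyzed column's
own type:

    SELECT starelid, staattnum, stavalues1 FROM pg_statistic LIMIT 5;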
refers to a non-DISTINCT output column of a DISTINCT ON subquery, or
if it refers to a function-returning-set, we cannot push it down.
But the old implementation refused to push down *any* quals if the
subquery had any such 'dangerous' outputs. Now we just look at the
output columns actually referenced by each qual expression. More code
than before, but probably no slower since we don't make unnecessary checks.
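A sketch of the distinction (invented table and columns; a is the
DISTINCT ON column, b is not):

    SELECT * FROM (SELECT DISTINCT ON (a) a, b FROM t) ss
    WHERE a > 10;   -- pushable: references only the DISTINCT ON column
    SELECT * FROM (SELECT DISTINCT ON (a) a, b FROM t) ss
    WHERE b > 10;   -- not pushable: b is a non-DISTINCT output column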
some of the algorithms for higher functions. I see about a factor of ten
speedup on the 'numeric' regression test, but it's unlikely that that test
is representative of real-world applications.
initdb forced due to change of on-disk representation for NUMERIC.
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values.
Also updated the CREATE SEQUENCE docs to mention NO MINVALUE and NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
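A sketch of the new statement (sequence name hypothetical):

    ALTER SEQUENCE myseq
        INCREMENT BY 2 MINVALUE 10 NO MAXVALUE
        RESTART WITH 100 CACHE 5 CYCLE;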
--
Rod Taylor <rbt@rbt.ca>
> >
> > - Add a check in pg_dump to see if the value returned is the max/min
> > value and replace it with NO MAXVALUE or NO MINVALUE.
> >
> > - Change START and INCREMENT to use START WITH and INCREMENT BY syntax.
> > This makes it a touch easier to port to other databases with sequences
> > (Oracle). PostgreSQL supports both syntaxes already.
>
> + char bufm[100],
> + bufx[100];
>
> This seems to be an arbitrary size. Why not set it to the actual maximum
> length?
>
> Also:
>
> + snprintf(bufm, 100, INT64_FORMAT, SEQ_MINVALUE);
> + snprintf(bufx, 100, INT64_FORMAT, SEQ_MAXVALUE);
>
> sizeof(bufm), sizeof(bufx) is probably the more
> maintenance-friendly/standard way to do it.
I changed the code to use sizeof, but will wait for a response from
Peter before changing the size. The sequence code consistently uses
100 for this purpose.
Rod Taylor <rbt@rbt.ca>
> weird behavior across fork boundaries; (b) the additional memory space
> that has to be duplicated into child processes will cost something per
> child launch, even if the child never uses it. But these are only
> arguments that it might not *always* be a prudent thing to do, not that
> we shouldn't give the DBA the tool to do it if he wants. So fire away.
Here is a patch for the above, including a documentation update. It
creates a new GUC variable "preload_libraries", which accepts a list in
the form:
preload_libraries = '$libdir/mylib1:initfunc,$libdir/mylib2'
If ":initfunc" is omitted or not found, no initialization function is
executed, but the library is still preloaded. If "$libdir/mylib" isn't
found, the postmaster refuses to start.
In my testing with PL/R, it reduces the first call to a PL/R function
(after connecting) from almost 2 seconds down to about 8 ms.
Joe Conway
utility statement (DeclareCursorStmt) with a SELECT query dangling from
it, rather than a SELECT query with a few unusual fields in it. Add
code to determine whether a planned query can safely be run backwards.
If DECLARE CURSOR specifies SCROLL, ensure that the plan can be run
backwards by adding a Materialize plan node if it can't. Without SCROLL,
you get an error if you try to fetch backwards from a cursor that can't
handle it. (There is still some discussion about what the exact
behavior should be, but this is necessary infrastructure in any case.)
Along the way, make EXPLAIN DECLARE CURSOR work.
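Sketch of the resulting behavior (table names hypothetical):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM t1 JOIN t2 USING (id);
    FETCH FORWARD 5 FROM c;
    FETCH BACKWARD 2 FROM c;   -- guaranteed to work: SCROLL adds a
                               -- Materialize node if the plan itself
                               -- can't run backwards
    COMMIT;
    EXPLAIN DECLARE c2 CURSOR FOR SELECT * FROM t1;   -- now works too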