eliminating some wildly inconsistent coding in various parts of the
system. I set MAXPGPATH = 1024 in config.h.in. If anyone is really
convinced that there ought to be a configure-time test to set the
value, go right ahead ... but I think it's a waste of time.
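For illustration only, the usage pattern the constant supports; the helper below is invented for the example, not code from the tree:

    #include <stdio.h>

    #ifndef MAXPGPATH
    #define MAXPGPATH 1024          /* as now set in config.h.in */
    #endif

    /* Build "<dir>/<file>" into a caller-supplied buffer; snprintf()
     * guarantees we never write more than MAXPGPATH bytes. */
    static void
    make_path(char *path, const char *dir, const char *file)
    {
        snprintf(path, MAXPGPATH, "%s/%s", dir, file);
    }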
a generalized module 'tuplesort.c' that can sort either HeapTuples or
IndexTuples, and is not tied to execution of a Sort node. Clean up
memory leaks in sorting, and replace nbtsort.c's private implementation
of mergesorting with calls to tuplesort.c.
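The calling pattern looks roughly like this; the signatures are paraphrased rather than quoted from tuplesort.h, and the fetch/emit helpers are stand-ins:

    Tuplesortstate *state;
    HeapTuple   tuple;
    bool        should_free;

    /* tupdesc, nkeys, scankeys are assumed set up by the caller */
    state = tuplesort_begin_heap(tupdesc, nkeys, scankeys, false);

    while ((tuple = fetch_next_input_tuple()) != NULL)      /* stand-in */
        tuplesort_puttuple(state, (void *) tuple);

    tuplesort_performsort(state);

    while ((tuple = (HeapTuple) tuplesort_gettuple(state, true,
                                                   &should_free)) != NULL)
    {
        emit_output_tuple(tuple);                           /* stand-in */
        if (should_free)
            pfree(tuple);
    }

    tuplesort_end(state);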
recycle storage within sort temp file on a block-by-block basis. This
reduces peak disk usage to essentially just the volume of data being
sorted, whereas it had been about 4x the data volume before.
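The recycling scheme boils down to keeping a free list of released block numbers and preferring those over extending the file. A hypothetical sketch (none of these names are in the tree):

    /* Hand out a block for writing: reuse a released one if possible. */
    static long
    get_free_block(long *freelist, int *nfree, long *file_length)
    {
        if (*nfree > 0)
            return freelist[--*nfree];      /* recycle a released block */
        return (*file_length)++;            /* otherwise extend the file */
    }

    /* Called once a block's contents have been read and merged;
     * the caller is assumed to have sized the freelist array. */
    static void
    release_block(long *freelist, int *nfree, long blknum)
    {
        freelist[(*nfree)++] = blknum;
    }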
BufFile so that it handles multi-segment temporary files transparently.
This allows sorts and hashes to work with data exceeding 2Gig (or whatever
the local limit on file size is). Change psort.c to use relative seeks
instead of absolute seeks for backwards scanning, so that it won't fail
when the data volume exceeds 2Gig.
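The key point is that a position in a multi-segment BufFile is a (segment, offset) pair rather than one long, so the logical file is no longer bounded by the range of a 32-bit offset. A hypothetical sketch; RELSEG_SIZE and BLCKSZ are the usual config.h constants, the rest is invented:

    typedef struct BufFilePos
    {
        int         segno;          /* which physical segment file */
        long        offset;         /* byte offset within that segment */
    } BufFilePos;

    #define SEG_SIZE    ((long) RELSEG_SIZE * BLCKSZ)   /* bytes/segment */

    /* Advance the position by nbytes (forward only, for simplicity),
     * spilling into later segments as needed. */
    static void
    buffile_advance(BufFilePos *pos, long nbytes)
    {
        pos->offset += nbytes;
        while (pos->offset >= SEG_SIZE)
        {
            pos->offset -= SEG_SIZE;
            pos->segno++;
        }
    }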
expressions in CREATE TABLE. There is no longer an emasculated expression
syntax for these things; it's a full a_expr for constraints, and b_expr
for defaults (unfortunately the fact that NOT NULL is a part of the
column constraint syntax causes a shift/reduce conflict if you try a_expr.
Oh well --- at least parenthesized boolean expressions work now). Also,
the stored expression for a column default is not pre-coerced to the column
type; we rely on transformInsertStatement to do that when the default is
actually used. This means "f1 datetime default 'now'" behaves the way
people usually expect it to.
BTW, all the support code is now there to implement ALTER TABLE ADD
CONSTRAINT and ALTER TABLE ADD COLUMN with a default value. I didn't
actually teach ALTER TABLE to call it, but it wouldn't be much work.
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.
TODO:
Generic builtin trigger procedures
Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
Support of new trigger type in pg_dump
Swapping of huge # of events to disk
Jan
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing). Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM. Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing). A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long. Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
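The resulting calling convention, sketched (the lock-mode symbols are the real ones; error handling omitted):

    Relation    rel;

    /* Open and lock in one step; holding the lock guarantees pending
     * shared-inval messages for the rel have been processed. */
    rel = heap_open(relationId, AccessShareLock);

    /* ... examine or scan the relation ... */

    heap_close(rel, AccessShareLock);   /* drop lock, decrement refcount */

    /* Callers that already hold a sufficient lock pass NoLock: */
    rel = heap_open(relationId, NoLock);
    /* ... */
    heap_close(rel, NoLock);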
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
This change seems necessary in conjunction with long queries, and it
cleans up some bogosity in connection with long EXPLAIN texts anyway.
Note that current libpq will accept any length error message (at least
until it runs out of memory); prior versions have a limit of 8K, but
will cleanly discard excess error text, so there shouldn't be any
big compatibility problems with old clients.
transaction abort --- before it only worked if there was exactly one level
of allocation context stacked in the blank portal. Now it does the right
thing for any depth, including zero...
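Sketched, the abort-time cleanup is now just a loop; these routine names are invented for illustration:

    /* Unwind however many allocation contexts the blank portal has
     * stacked up; a depth of zero is fine (the loop body never runs). */
    while (portal_context_depth(blankPortal) > 0)
        portal_pop_and_free_context(blankPortal);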
has positive refcount, it is rebuilt from pg_class data. This ensures
that relcache entries will track changes made by other backends. Formerly,
a shared inval report would just be ignored if it happened to arrive while
the relcache entry was in use. Also, fix relcache to reset ref counts
to zero during transaction abort. Finally, change LockRelation() so that
it checks for shared inval reports after obtaining the lock. In this way,
once any kind of lock has been obtained on a rel, we can trust the relcache
entry to be up-to-date.
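The new ordering inside LockRelation(), sketched with paraphrased calls (the exact routine names differ):

    void
    LockRelation(Relation relation, LOCKMODE lockmode)
    {
        /* First obtain the lock, blocking if necessary. */
        LockAcquire(relation, lockmode);        /* paraphrased */

        /*
         * Only now drain pending shared-inval messages.  An in-use
         * relcache entry (positive refcount) that was invalidated is
         * rebuilt from pg_class here rather than left stale.
         */
        ProcessSharedInvalQueue();              /* paraphrased */
    }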
ALLOC_BIGCHUNK_LIMIT are always allocated as separate malloc() blocks,
and are free()d immediately upon pfree(). Also, if such a chunk is enlarged
with repalloc(), translate the operation into a realloc() so as to
minimize memory usage. Of course, these large chunks still get freed
automatically if the alloc set is reset.
I have set ALLOC_BIGCHUNK_LIMIT at 64K for now, but perhaps another
size would be better?
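The palloc-time decision, sketched; alloc_from_pool() and remember_separate_block() are stand-ins for the existing aset.c machinery:

    #include <stdlib.h>

    #define ALLOC_BIGCHUNK_LIMIT    (64 * 1024)     /* 64K for now */

    static void *
    alloc_chunk(void *set, size_t size)
    {
        if (size > ALLOC_BIGCHUNK_LIMIT)
        {
            /* Oversized chunk: give it its own malloc() block so that
             * pfree() can free() it immediately and repalloc() can be
             * turned into a plain realloc(). */
            void       *chunk = malloc(size);

            remember_separate_block(set, chunk);    /* stand-in */
            return chunk;
        }
        return alloc_from_pool(set, size);          /* stand-in */
    }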
neqsel now behave as per my suggestions in pghackers a few days ago.
selectivity for < > <= >= should work OK for integral types as well, but
still need work for nonintegral types. Since these routines have never
actually executed before :-(, this may result in some significant changes
in the optimizer's choices of execution plans. Let me know if you see
any serious misbehavior.
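The commit doesn't spell out the estimator, but the usual approach for '<' on an integral column is linear interpolation of the comparison constant between the column's min and max; a sketch under that assumption:

    /* Estimated fraction of rows satisfying "col < val", assuming the
     * values are spread roughly evenly between lobound and hibound. */
    static double
    lt_selectivity(double val, double lobound, double hibound)
    {
        double      sel;

        if (hibound <= lobound)
            return 0.5;             /* no usable range; fall back */

        sel = (val - lobound) / (hibound - lobound);
        if (sel < 0.0)              /* constant below observed range */
            sel = 0.0;
        else if (sel > 1.0)         /* constant above observed range */
            sel = 1.0;
        return sel;
    }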
CAUTION: THESE CHANGES REQUIRE INITDB. pg_statistic table has changed.
this one could be useful for people experiencing out-of-memory crashes while
executing queries which retrieve or use a very large number of tuples.
The problem happens when storage is allocated for function results used in
a large query, for example:
select upper(name) from big_table;
select big_table.array[1] from big_table;
select count(upper(name)) from big_table;
This patch is a dirty hack that fixes the out-of-memory problem for the most
common cases, like the above ones. It is not the final solution for the
problem but it can work for some people, so I'm posting it.
The patch should be safe because all changes are under #ifdef. Furthermore
the feature can be enabled or disabled at runtime by the `free_tuple_memory'
options in the pg_options file. The option is disabled by default and must
be explicitly enabled at runtime to have any effect.
To enable the patch, add the following line to Makefile.custom:
CUSTOM_COPT += -DFREE_TUPLE_MEMORY
To enable the option at runtime, add the following line to pg_options:
free_tuple_memory=1
Massimo
the gettimeofday code doesn't compile under Linux with glibc2 because
the DST_NONE constant is no longer defined. It seems that this code
(written by me) has always been wrong but was working for some reason.
From: Massimo Dal Zotto <dz@cs.unitn.it>
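The portable fix is presumably just to stop touching struct timezone; a minimal, self-contained example:

    #include <stdio.h>
    #include <sys/time.h>

    int
    main(void)
    {
        struct timeval tv;

        /* Passing NULL for the obsolete timezone argument avoids any
         * reference to struct timezone or DST_NONE, which glibc2 no
         * longer defines. */
        if (gettimeofday(&tv, NULL) != 0)
        {
            perror("gettimeofday");
            return 1;
        }
        printf("%ld.%06ld\n", (long) tv.tv_sec, (long) tv.tv_usec);
        return 0;
    }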
-d4 now prints compressed trees from nodeToString()
-d5 prints pretty trees via nodeDisplay()
new pg_options: pretty_plan, pretty_parse, pretty_rewritten
Jan
been applied. The patches are in the .tar.gz attachment at the end:
varchar-array.patch this patch adds support for arrays of bpchar() and
varchar(), which were always missing from postgres.
These datatypes can be used to replace the _char4,
_char8, etc., which were dropped some time ago.
block-size.patch this patch fixes many errors in the parser and other
programs which happen with very large query statements
(> 8K) when using a page size larger than 8192.
This patch is needed if you want to submit queries
larger than 8K. Postgres supports tuples up to 32K
but you can't insert them because you can't submit
queries larger than 8K. My patch fixes this problem.
The patch also replaces all the occurrences of `8192'
and `1<<13' in the sources with the proper constants
defined in include files, just to make the code clearer;
you should now never find 8192 hardwired in the C code.
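The pattern the patch establishes, in miniature (BLCKSZ is the real configurable constant; the buffer name is invented):

    #include "config.h"             /* defines BLCKSZ */

    /* Before: char page[8192];  -- silently wrong for other page sizes */
    char        page[BLCKSZ];       /* after: tracks the configured size */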
--
Massimo Dal Zotto
to save a little bit of backend startup time. This way, the first
backend started after a VACUUM will rebuild the init file with up-to-date
statistics for the critical system indexes.