Commit Graph

3451 Commits

Author SHA1 Message Date
26cbccd52c Add fsync() define for Win32 to cover cases other than wal_sync_method
where we need fsync().
2005-06-16 17:53:54 +00:00
2becf48483 Update catalog version for recent function additions. 2005-06-15 12:56:35 +00:00
c119c5bd49 Change the implementation of hash join to attempt to avoid unnecessary
work if either of the join relations is empty. The logic is:

(1) if the inner relation's startup cost is less than the outer
    relation's startup cost and this is not an outer join, read
    a single tuple from the inner relation via ExecHash()
      - if NULL, we're done

(2) read a single tuple from the outer relation
      - if NULL, we're done

(3) build the hash table on the inner relation
      - if hash table is empty and this is not an outer join,
        we're done

(4) otherwise, do hash join as usual

The implementation uses the new MultiExecProcNode API, per a
suggestion from Tom: invoking ExecHash() now produces the first
tuple from the Hash node's child node, whereas MultiExecHash()
builds the hash table.

I had to put in a bit of a kludge to get the row count returned
for EXPLAIN ANALYZE to be correct: since ExecHash() is invoked to
return a tuple, and then MultiExecHash() is invoked, we would
return one too many tuples to EXPLAIN ANALYZE. I hacked around
this by just manually detecting this situation and subtracting 1
from the EXPLAIN ANALYZE row count.
2005-06-15 07:27:44 +00:00
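
A minimal SQL sketch of the case the commit above targets, assuming the planner
chooses a hash join; the table names are illustrative:

CREATE TABLE big_outer (id int, val text);
CREATE TABLE empty_inner (id int);   -- deliberately left empty

-- With the change above, a hash join whose inner relation turns out to be
-- empty (and which is not an outer join) can finish without building a
-- hash table or scanning the rest of the outer relation.
EXPLAIN ANALYZE
SELECT * FROM big_outer b JOIN empty_inner e ON b.id = e.id;
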
0851a6fbc7 This patch makes it possible to use the full set of timezones when doing
"AT TIME ZONE", and not just the short list previously available. For
example:

SELECT CURRENT_TIMESTAMP AT TIME ZONE 'Europe/London';

works fine now. It will also obey whatever DST rules were in effect at
that particular date, which the previous implementation did not.

It also supports AT TIME ZONE on the timetz datatype. The whole
handling of DST is a bit bogus there, so I chose to make it use whatever
DST rules are in effect at the time of executing the query. Not sure if
anybody is actually *using* timetz though; it seems pretty
unpredictable just because of this...

Magnus Hagander
2005-06-15 00:34:11 +00:00
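
A small hedged example of the timetz case mentioned above; the result depends
on the DST rules in effect when the query is executed:

SELECT '12:00:00+00'::timetz AT TIME ZONE 'Europe/London';
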
5955945828 Support 3 and 4-byte unicode characters.
John Hansen
2005-06-15 00:15:08 +00:00
8563ccae2c Simplify shared-memory lock data structures as per recent discussion:
it is sufficient to track whether a backend holds a lock or not, and
store information about transaction vs. session locks only in the
inside-the-backend LocalLockTable.  Since there can now be but one
PROCLOCK per lock per backend, LockCountMyLocks() is no longer needed,
thus eliminating some O(N^2) behavior when a backend holds many locks.
Also simplify the LockAcquire/LockRelease API by passing just a
'sessionLock' boolean instead of a transaction ID.  The previous API
was designed with the idea that per-transaction lock holding would be
important for subtransactions, but now that we have subtransactions we
know that this is unwanted.  While at it, add an 'isTempObject' parameter
to LockAcquire to indicate whether the lock is being taken on a temp
table.  This is not used just yet, but will be needed shortly for
two-phase commit.
2005-06-14 22:15:33 +00:00
f5835b4b8d Add pg_postmaster_start_time() function.
Euler Taveira de Oliveira
Matthias Schmidt
2005-06-14 21:04:42 +00:00
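
Usage is a plain function call returning timestamp with time zone:

SELECT pg_postmaster_start_time();
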
954f6bcffe Add GUC krb_server_hostname so the server hostname can be specified as
part of service principal.  If not set, any service principal matching
an entry in the keytab can be used.

NEW KERBEROS MATCHING BEHAVIOR FOR 8.1.

Todd Kover
2005-06-14 17:43:14 +00:00
37c839365c WAL for GiST. It works for online backup and so on, but on
recovery after a crash (power loss etc.) it may say that it can't restore
the index and that the index should be reindexed.

Some code refactoring.
2005-06-14 11:45:14 +00:00
c186c93148 Change the planner to allow indexscan qualification clauses to use
nonconsecutive columns of a multicolumn index, as per discussion around
mid-May (pghackers thread "Best way to scan on-disk bitmaps").  This
turns out to require only minimal changes in btree, and so far as I can
see none at all in GiST.  btcostestimate did need some work, but its
original assumption that index selectivity == heap selectivity was
quite bogus even before this.
2005-06-13 23:14:49 +00:00
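
A hedged illustration of the planner change; the table and index are invented:

CREATE TABLE t (a int, b int, c int);
CREATE INDEX t_abc_idx ON t (a, b, c);

-- Constrains columns a and c but skips b; a qual on c can now be used as an
-- indexscan qualification clause against the multicolumn index.
EXPLAIN SELECT * FROM t WHERE a = 1 AND c = 42;
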
a2fb7b8a1f Adjust lo_open() so that specifying INV_READ without INV_WRITE creates
a descriptor that uses the current transaction snapshot, rather than
SnapshotNow as it did before (and still does if INV_WRITE is set).
This means pg_dump will now dump a consistent snapshot of large object
contents, as it never could do before.  Also, add a lo_create() function
that is similar to lo_creat() but allows the desired OID of the large
object to be specified.  This will simplify pg_restore considerably
(but I'll fix that in a separate commit).
2005-06-13 02:26:53 +00:00
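
A sketch of the server-side calls involved; the OID is arbitrary, and 262144
is the value of INV_READ:

BEGIN;
SELECT lo_create(12345);          -- create a large object with the requested OID
SELECT lo_open(12345, 262144);    -- INV_READ only: reads now use the transaction snapshot
COMMIT;
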
2f1210629c Separate predicate-testing code out of indxpath.c, making it a module
in its own right.  As proposed by Simon Riggs, but with some editorializing
of my own.
2005-06-10 22:25:37 +00:00
d46bc444ac Implement two new special variables in PL/PgSQL: SQLSTATE and SQLERRM.
These contain the SQLSTATE and error message of the current exception,
respectively. They are scope-local variables that are only defined
in exception handlers (so attempting to reference them outside an
exception handler is an error). Update the regression tests and the
documentation.

Also, do some minor related cleanup: export an unpack_sql_state()
function from the backend and use it to unpack a SQLSTATE into a
string, and add a free_var() function to pl_exec.c

Original patch from Pavel Stehule, review by Neil Conway.
2005-06-10 16:23:11 +00:00
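
A minimal PL/pgSQL sketch of the new variables; the function itself is purely
illustrative:

CREATE FUNCTION safe_div(a numeric, b numeric) RETURNS numeric AS $$
BEGIN
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN
        -- SQLSTATE and SQLERRM are defined only inside the exception handler
        RAISE NOTICE 'caught %: %', SQLSTATE, SQLERRM;
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;

SELECT safe_div(1, 0);
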
a87ee007ed Quick hack to allow the outer query's tuple_fraction to be passed down
to a subquery if the outer query is simple enough that the LIMIT can
be reflected directly to the subquery.  This didn't use to be very
interesting, because a subquery that couldn't have been flattened into
the upper query was usually not going to be very responsive to
tuple_fraction anyway.  But with new code that allows UNION ALL subqueries
to pay attention to tuple_fraction, this is useful to do.  In particular
this lets the optimization occur when the UNION ALL is directly inside
a view.
2005-06-10 03:32:25 +00:00
3b167a4099 If a LIMIT is applied to a UNION ALL query, plan each UNION arm as
if the limit were directly applied to it.  This does not actually
add a LIMIT plan node to the generated subqueries --- that would be
useless overhead --- but it does cause the planner to prefer fast-
start plans when the limit is small.  After an idea from Phil Endecott.
2005-06-10 02:21:05 +00:00
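
A hedged example of the case the two planner changes above are aimed at; the
view and table names are invented:

CREATE VIEW all_events AS
    SELECT * FROM events_2004
    UNION ALL
    SELECT * FROM events_2005;

-- The small LIMIT is reflected down to each UNION ALL arm, so the planner
-- prefers fast-start plans for the individual selects.
EXPLAIN SELECT * FROM all_events LIMIT 10;
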
532ca3083d Avoid bare 'struct Node;' declaration --- provokes annoying warnings
on some compilers.
2005-06-09 18:44:05 +00:00
4d0e7b4aac Please find attached a patch (diff -c against cvs HEAD) to add a
function that accepts a double precision argument assumed to be a Unix
epoch timestamp and returns timestamp with time zone, and accompanying
documentation.

Usage:

test=# select to_timestamp(200120400);
       to_timestamp
------------------------
  1976-05-05 14:00:00+09
(1 row)

Michael Glaesemann
2005-06-09 16:35:09 +00:00
a31ad27fc5 Simplify the planner's join clause management by storing join clauses
of a relation in a flat 'joininfo' list.  The former arrangement grouped
the join clauses according to the set of unjoined relids used in each;
however, profiling on test cases involving lots of joins proves that
that data structure is a net loss.  It takes more time to group the
join clauses together than is saved by avoiding duplicate tests later.
It doesn't help any that there are usually not more than one or two
clauses per group ...
2005-06-09 04:19:00 +00:00
e3a33a9a9f Marginal hack to avoid spending a lot of time in find_join_rel during
large planning problems: when the list of join rels gets too long, make
an auxiliary hash table that hashes on the identifying Bitmapset.
2005-06-08 23:02:05 +00:00
77c168a836 Remove grammar productions for prefix and postfix % and ^ operators,
as well as the existing pg_catalog entries for prefix and postfix %.
These have never been documented, though they did appear in one old
regression test.  This avoids surprising behavior in cases like
"SELECT -25 % -10".  Per recent discussion.
Note: although there is a catalog change here, I did not force initdb
since there's no harm in leaving the inaccessible entries in one's
copy of pg_operator.
2005-06-08 21:15:29 +00:00
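
The example from the commit message now parses only as the infix modulo
operator:

SELECT -25 % -10;   -- yields -5
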
f5b2f60bd1 Change WAL-logging scheme for multixacts to be more like regular
transaction IDs, rather than like subtrans; in particular, the information
now survives a database restart.  Per previous discussion, this is
essential for PITR log shipping and for 2PC.
2005-06-08 15:50:28 +00:00
657c098e41 Add a function lastval(), which returns the value returned by the
last nextval() or setval() performed by the current session. Update the
docs, add regression tests, and bump the catalog version. Patch from
Dennis Björklund, various improvements by Neil Conway.
2005-06-07 07:08:35 +00:00
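
A short usage sketch; the sequence name is arbitrary:

CREATE SEQUENCE s;
SELECT nextval('s');   -- returns 1
SELECT lastval();      -- also 1: the value returned by this session's last nextval()/setval()
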
ee7ac7b11e Modify XLogInsert API to make callers specify whether pages to be backed
up have the standard layout with unused space between pd_lower and pd_upper.
When this is set, XLogInsert will omit the unused space without bothering
to scan it to see if it's zero.  That saves time in XLogInsert, and also
allows reversion of my earlier patch to make PageRepairFragmentation et al
explicitly re-zero freed space.  Per suggestion by Heikki Linnakangas.
2005-06-06 20:22:58 +00:00
4c8495a1f2 Remove the mostly-stubbed-out-anyway support routines for WAL UNDO.
That code is never going to be used in the foreseeable future, and
where it's more than a stub it's making the redo routines harder to
read.
2005-06-06 17:01:25 +00:00
9a586fe0c5 Nab some low-hanging fruit: replace the planner's base_rel_list and
other_rel_list with a single array indexed by rangetable index.
This reduces find_base_rel from O(N) to O(1) without any real penalty.
While find_base_rel isn't one of the major bottlenecks in any profile
I've seen so far, it was starting to creep up on the radar screen
for complex queries --- so might as well fix it.
2005-06-06 04:13:36 +00:00
9ab4d98168 Remove planner's private fields from Query struct, and put them into
a new PlannerInfo struct, which is passed around instead of the bare
Query in all the planning code.  This commit is essentially just a
code-beautification exercise, but it does open the door to making
larger changes to the planner data structures without having to muck
with the widely-known Query struct.
2005-06-05 22:32:58 +00:00
a4996a8953 Replace the parser's namespace tree (which formerly had the same
representation as the jointree) with two lists of RTEs, one showing
the RTEs accessible by qualified names, and the other showing the RTEs
accessible by unqualified names.  I think this is conceptually simpler
than what we did before, and it's sure a whole lot easier to search.
This seems to eliminate the parse-time bottleneck for deeply nested
JOIN structures that was exhibited by phil@vodafone.
2005-06-05 00:38:11 +00:00
72c53ac3a7 Allow kerberos name and username case sensitivity to be specified from
postgresql.conf.

---------------------------------------------------------------------------


Here's an updated version of the patch, with the following changes:

1) No longer uses "service name" as "application version". It's instead
hardcoded as "postgres". It could be argued that this part should be
backpatched to 8.0, but it doesn't make a big difference until you can
start changing it with GUC / connection parameters. This change only
affects kerberos 5, not 4.

2) Now downcases kerberos usernames when the client is running on win32.

3) Adds guc option for "krb_caseins_users" to make the server ignore
case mismatch which is required by some KDCs such as Active Directory.
Off by default, per discussion with Tom. This change only affects
kerberos 5, not 4.

4) Updated so it doesn't conflict with the rendezvous/bonjour patch
already in ;-)

Magnus Hagander
2005-06-04 20:42:43 +00:00
e18e8f8735 Change expandRTE() and ResolveNew() back to taking just the single
RTE of interest, rather than the whole rangetable list.  This makes
the API more understandable and avoids duplicate RTE lookups.  This
patch reverts no-longer-needed portions of my patch of 2004-08-19.
2005-06-04 19:19:42 +00:00
ba42002461 Revise handling of dropped columns in JOIN alias lists to avoid a
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns.  Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column.  This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns.  While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff.  A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
2005-06-03 23:05:30 +00:00
b5ebef7c41 Push enable/disable of notify and catchup interrupts all the way down
to just around the bare recv() call that gets a command from the client.
The former placement in PostgresMain was unsafe because the intermediate
processing layers (especially SSL) use facilities such as malloc that are
not necessarily re-entrant.  Per report from counterstorm.com.
2005-06-02 21:03:25 +00:00
21fda22ec4 Change CRCs in WAL records from 64bit to 32bit for performance reasons.
Instead of a separate CRC on each backup block, include backup blocks
in their parent WAL record's CRC; this is important to ensure that the
backup block really goes with the WAL record, ie there was not a page
tear right at the start of the backup block.  Implement a simple form
of compression of backup blocks: drop any run of zeroes starting at
pd_lower, so as not to store the unused 'hole' that commonly exists in
PG heap and index pages.  Tweak PageRepairFragmentation and related
routines to ensure they keep the unused space zeroed, so that the above
compression method remains effective.  All per recent discussions.
2005-06-02 05:55:29 +00:00
83b72ee286 ParseComplexProjection should make use of expandRecordVariable so that
it can handle cases like (foo.x).y where foo is a subquery and x is
a function-returning-RECORD RTE in that subquery.
2005-05-31 01:03:23 +00:00
978129f28e Document get_call_result_type() and friends; mark TypeGetTupleDesc()
and RelationNameGetTupleDesc() as deprecated; remove uses of the
latter in the contrib library.  Along the way, clean up crosstab()
code and documentation a little.
2005-05-30 23:09:07 +00:00
25146d3c29 Add support for NUMERIC ^ NUMERIC based on power(numeric, numeric). 2005-05-30 20:59:17 +00:00
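
For example, with both operands as numeric literals the ^ operator now
resolves via power(numeric, numeric):

SELECT 2.5 ^ 2.0;   -- 6.25
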
adfeef55cb When enqueueing after-row triggers for updates of a table with a foreign
key, compare the new and old row versions. If the foreign key column has
not changed, we needn't enqueue the trigger, since the update cannot
violate the foreign key. This optimization was previously applied in the
RI trigger function, but it is more efficient to avoid firing the trigger
altogether. Per recent discussion on pgsql-hackers.

Also add a regression test for some unintuitive foreign key behavior, and
refactor some code that deals with the OIDs of the various RI trigger
functions.
2005-05-30 07:20:59 +00:00
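
A hedged illustration of the optimization; the schema is invented:

CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent(id),
                     note text);

-- The foreign key column parent_id is unchanged, so the after-row RI trigger
-- need not be enqueued for this update at all.
UPDATE child SET note = 'touched' WHERE id = 1;
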
f99b75b0a0 Create separate ON INSERT and ON UPDATE triggers on tables with foreign
keys, rather than a single trigger for both events. This should not change
functionality, but it is more consistent: previously, there were trigger
functions for both "check_insert" and "check_update", but the former was
used for both events.

Bump catalog version number (not strictly necessary, but best to be
cautious).
2005-05-30 06:52:38 +00:00
cfd9be939e Change the UNKNOWN type to have an internal representation matching
cstring, rather than text, so as to eliminate useless conversions
inside the parser.  Per recent discussion.
2005-05-30 01:20:50 +00:00
140b078d2a Improve LockAcquire API per my recent proposal. All error conditions
are now reported via elog, eliminating the need to test the result code
at most call sites.  Make it possible for the caller to distinguish a
freshly acquired lock from one already held in the current transaction.
Use that capability to avoid redundant AcceptInvalidationMessages() calls
in LockRelation().
2005-05-29 22:45:02 +00:00
d66daabec9 Remove typeidIsValid() checks in can_coerce_type(). These checks
were pretty expensive and I believe the case they were put in to
defend against can no longer arise, now that we have dependency checks
to prevent deletion of a type entry that is still referenced.  Certainly
the example given in the CVS log entry can't happen anymore.
Since this was the only use of typeidIsValid(), remove the routine too.
2005-05-29 18:24:14 +00:00
e92a88272e Modify hash_search() API to prevent future occurrences of the error
spotted by Qingqing Zhou.  The HASH_ENTER action now automatically
fails with elog(ERROR) on out-of-memory --- which incidentally lets
us eliminate duplicate error checks in quite a bunch of places.  If
you really need the old return-NULL-on-out-of-memory behavior, you
can ask for HASH_ENTER_NULL.  But there is now an Assert in that path
checking that you aren't hoping to get that behavior in a palloc-based
hash table.
Along the way, remove the old HASH_FIND_SAVE/HASH_REMOVE_SAVED actions,
which were not being used anywhere anymore, and were surely too ugly
and unsafe to want to see revived again.
2005-05-29 04:23:07 +00:00
32e8fc4a28 Arrange to cache fmgr lookup information for an index's access method
routines in the index's relcache entry, instead of doing a fresh fmgr_info
on every index access.  We were already doing this for the index's opclass
support functions; not sure why we didn't think to do it for the AM
functions too.  This supersedes the former method of caching (only)
amgettuple in indexscan scan descriptors; it's an improvement because the
function lookup can be amortized across multiple statements instead of
being repeated for each statement.  Even though lookup for builtin
functions is pretty cheap, this seems to drop a percent or two off some
simple benchmarks.
2005-05-27 23:31:21 +00:00
a4374f9070 Remove second argument from textToQualifiedNameList(), as it is no longer
used. From Jaime Casanova.
2005-05-27 00:57:49 +00:00
63e0d612f5 Adjust datetime parsing to be more robust. We now pass the length of the
working buffer into ParseDateTime() and reject too-long input there,
rather than checking the length of the input string before calling
ParseDateTime(). The old method was bogus because ParseDateTime() can use
a variable amount of working space, depending on the content of the
input string (e.g. how many fields need to be NUL terminated). This fixes
a minor stack overrun -- I don't _think_ it's exploitable, although I
won't claim to be an expert.

Along the way, fix a bug reported by Mark Dilger: the working buffer
allocated by interval_in() was too short, which resulted in rejecting
some perfectly valid interval input values. I added a regression test for
this fix.
2005-05-26 02:04:14 +00:00
b492c3accc Add parentheses to macros when args are used in computations. Without
them, the execution behavior could be unexpected.
2005-05-25 21:40:43 +00:00
f534820d4d Put parentheses around use of macro arguments in FMODULO and TMODULO. 2005-05-24 04:03:01 +00:00
4550c1e519 More macro cleanups for date/time. 2005-05-23 21:54:02 +00:00
5ebaae801c Add datetime macros for constants, for clarity:
#define SECS_PER_DAY  86400
#define USECS_PER_DAY INT64CONST(86400000000)
#define USECS_PER_HOUR    INT64CONST(3600000000)
#define USECS_PER_MINUTE INT64CONST(60000000)
#define USECS_PER_SEC INT64CONST(1000000)
2005-05-23 18:56:55 +00:00
e2159f3842 Teach the planner to remove SubqueryScan nodes from the plan if they
aren't doing anything useful (ie, neither selection nor projection).
Also, extend to SubqueryScan the hacks already in place to avoid
unnecessary ExecProject calls when the result would just be the same
tuple the subquery already delivered.  This saves some overhead in
UNION and other set operations, as well as avoiding overhead for
unflatten-able subqueries.  Per example from Sokolov Yura.
2005-05-22 22:30:20 +00:00
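
A trivial example of the kind of subquery affected; the table name is made up:

-- The sub-select neither filters nor projects, so its SubqueryScan node can
-- be removed from the plan.
EXPLAIN SELECT * FROM (SELECT * FROM some_table) AS sub;
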
6dc7760ac3 Add support for wal_fsync_writethrough for Darwin, and restructure the
code to better handle writethrough.

Chris Campbell
2005-05-20 14:53:26 +00:00