Commit Graph

88 Commits

bec96c82f8 Dump sequence data based on the TableDataInfo flag
When considering a sequence's Data entry in dumpSequenceData, we were
actually looking at the sequence definition's dump flag to decide if we
should dump the data or not.  That's generally fine, except for when the
sequence data entry was created by processExtensionTables() because it's
a config sequence.  In that case, the sequence itself won't be marked as
dumping data because it's part of an extension, leading to the need for
processExtensionTables() to create the sequence data entry.

This leads to extension config sequence data not being included in the
dump when it should be.  Fix this by looking at the sequence data's dump
flag instead, just as dumpTableData() was doing for tables (which is why
config tables were correctly being handled), and add a regression test
to make sure we don't break it moving forward.

All of this is a bit round-about since we can now represent which
components of a given dump item should be dumped out through the dump
flag.  A future improvement might be to change checkExtensionMembership()
to check for config sequences/tables and set the dump flag based on that
directly, possibly removing the need for processExtensionTables().

Bug found by Daniele Varrazzo.

Discussion: https://postgr.es/m/CA+mi_8ZmxQM7+nZ7pJ8uyfxc9V3o=UAG14dVqvftdmvw8OJ3gQ@mail.gmail.com

Patch by Michael Paquier, with some tweaking of the regression tests by
me.

Back-patch to 9.6 where the bug was introduced.
2017-01-19 12:06:21 -05:00
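
A minimal sketch of the case fixed above (the sequence name is made up, and these statements would run inside an extension's install script): this is how an extension marks a sequence as a configuration object whose data pg_dump is expected to include.

    CREATE SEQUENCE my_ext_config_seq;
    SELECT pg_catalog.pg_extension_config_dump('my_ext_config_seq', '');
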
215b43cdc8 Improve RLS planning by marking individual quals with security levels.
In an RLS query, we must ensure that security filter quals are evaluated
before ordinary query quals, in case the latter contain "leaky" functions
that could expose the contents of sensitive rows.  The original
implementation of RLS planning ensured this by pushing the scan of a
secured table into a sub-query that it marked as a security-barrier view.
Unfortunately this results in very inefficient plans in many cases, because
the sub-query cannot be flattened and gets planned independently of the
rest of the query.

To fix, drop the use of sub-queries to enforce RLS qual order, and instead
mark each qual (RestrictInfo) with a security_level field establishing its
priority for evaluation.  Quals must be evaluated in security_level order,
except that "leakproof" quals can be allowed to go ahead of quals of lower
security_level, if it's helpful to do so.  This has to be enforced within
the ordering of any one list of quals to be evaluated at a table scan node,
and we also have to ensure that quals are not chosen for early evaluation
(i.e., use as an index qual or TID scan qual) if they're not allowed to go
ahead of other quals at the scan node.

This is sufficient to fix the problem for RLS quals, since we only support
RLS policies on simple tables and thus RLS quals will always exist at the
table scan level only.  Eventually these qual ordering rules should be
enforced for join quals as well, which would permit improving planning for
explicit security-barrier views; but that's a task for another patch.

Note that FDWs would need to be aware of these rules --- and not, for
example, send an insecure qual for remote execution --- but since we do
not yet allow RLS policies on foreign tables, the case doesn't arise.
This will need to be addressed before we can allow such policies.

Patch by me, reviewed by Stephen Frost and Dean Rasheed.

Discussion: https://postgr.es/m/8185.1477432701@sss.pgh.pa.us
2017-01-18 12:58:20 -05:00
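
A hedged sketch of the situation this planning change helps (table, column, and policy names are illustrative): the policy qual carries a higher security level than the user-supplied qual, so a leaky user qual must not run first, but the planner no longer has to wrap the scan in an unflattenable sub-query to guarantee that.

    CREATE TABLE rls_demo (id int, owner name, payload text);
    ALTER TABLE rls_demo ENABLE ROW LEVEL SECURITY;
    CREATE POLICY rls_demo_policy ON rls_demo
        USING (owner = current_user);   -- security qual, must be evaluated first
    -- a user-supplied qual whose function/operator may be leaky:
    -- SELECT * FROM rls_demo WHERE payload LIKE 'secret%';
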
f6d6d2920d Change default values for backup and replication parameters
This changes the default values of the following parameters:

wal_level = replica
max_wal_senders = 10
max_replication_slots = 10

in order to make it possible to make a backup and set up simple
replication on the default settings, without requiring a system restart.

Discussion: https://postgr.es/m/CABUevEy4PR_EAvZEzsbF5s+V0eEvw7shJ2t-AUwbHOjT+yRb3A@mail.gmail.com

Reviewed by Peter Eisentraut. Benchmark help from Tomas Vondra.
2017-01-14 17:14:56 +01:00
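
For reference, a quick way to check the new defaults from psql, plus a hedged sketch of reverting to the old minimal settings (as far as I can tell this also requires lowering max_wal_senders, and both changes only take effect after a restart):

    SHOW wal_level;              -- replica
    SHOW max_wal_senders;        -- 10
    SHOW max_replication_slots;  -- 10

    ALTER SYSTEM SET wal_level = 'minimal';
    ALTER SYSTEM SET max_wal_senders = 0;   -- needed when wal_level is minimal
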
05cd12ed5b pg_ctl: Change default to wait for all actions
The different actions in pg_ctl had different defaults for -w and -W,
mostly for historical reasons.  Most users will want the -w behavior, so
make that the default.

Remove the -w option in most example and test code, to avoid confusion
and reduce verbosity.  pg_upgrade is not touched, so it can continue to
work with older installations.

Reviewed-by: Beena Emerson <memissemerson@gmail.com>
Reviewed-by: Ryan Murphy <ryanfmurphy@gmail.com>
2017-01-14 09:15:08 -05:00
1d25779284 Update copyright via script for 2017 2017-01-03 13:48:53 -05:00
12bd7dd317 Use TSConfigRelationId in AlterTSConfiguration()
When we are altering a text search configuration, we are getting the
tuple from pg_ts_config and using its OID, so use TSConfigRelationId
when invoking any post-alter hooks and setting the object address.

Further, in the functions called from AlterTSConfiguration(), we're
saving information about the command via
EventTriggerCollectAlterTSConfig(), so we should be setting
commandCollected to true.  Also add a regression test to
test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.

Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru

Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where
it was introduced, and the fix for the ObjectAddressSet() call and
setting commandCollected to true to 9.5 where those changes to
ProcessUtilitySlow() were introduced.
2016-12-22 17:08:43 -05:00
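
The kind of command that exercises the fixed code path, in the spirit of the new test_ddl_deparse test (the configuration name is illustrative):

    CREATE TEXT SEARCH CONFIGURATION my_tsconf (COPY = english);
    ALTER TEXT SEARCH CONFIGURATION my_tsconf
        ALTER MAPPING FOR asciiword WITH simple;
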
19fcc0058e Fix broken wait-for-previous-process-to-exit loop in regression test.
Must do pg_stat_clear_snapshot() inside test's loop, or our snapshot of
pg_stat_activity will never change :-(.  Thinko in b3427dade -- evidently
my workstation never really iterated the loop in testing.  Per buildfarm.
2016-12-02 17:23:54 -05:00
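
A rough sketch of the corrected pattern, not the test's actual code: the snapshot must be cleared on every iteration, or the view's contents stay frozen for the rest of the transaction (the pid literal is illustrative).

    DO $$
    BEGIN
      LOOP
        PERFORM pg_stat_clear_snapshot();   -- without this, the view never changes
        EXIT WHEN NOT EXISTS
          (SELECT 1 FROM pg_stat_activity WHERE pid = 12345);
        PERFORM pg_sleep(0.1);
      END LOOP;
    END
    $$;
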
b3427dade1 Delete deleteWhatDependsOn() in favor of more performDeletion() flag bits.
deleteWhatDependsOn() had grown an uncomfortably large number of
assumptions about what it's used for.  There are actually only two minor
differences between what it does and what a regular performDeletion() call
can do, so let's invent additional bits in performDeletion's existing flags
argument that specify those behaviors, and get rid of deleteWhatDependsOn()
as such.  (We'd probably have done it this way from the start, except that
performDeletion didn't originally have a flags argument, IIRC.)

Also, add a SKIP_EXTENSIONS flag bit that prevents ever recursing to an
extension, and use that when dropping temporary objects at session end.
This provides a more general solution to the problem addressed in a hacky
way in commit 08dd23cec: if an extension script creates temp objects and
forgets to remove them again, the whole extension went away when its
contained temp objects were deleted.  The previous solution only covered
temp relations, but this solves it for all object types.

These changes require minor additions in dependency.c to pass the flags
to subroutines that previously didn't get them, but it's still a net
savings of code, and it seems cleaner than before.

Having done this, revert the special-case code added in 08dd23cec that
prevented addition of pg_depend records for temp table extension
membership, because that caused its own oddities: dropping an extension
that had created such a table didn't automatically remove the table,
leading to a failure if the table had another dependency on the extension
(such as use of an extension data type), or to a duplicate-name failure if
you then tried to recreate the extension.  But we keep the part that
prevents the pg_temp_nnn schema from becoming an extension member; we never
want that to happen.  Add a regression test case covering these behaviors.

Although this fixes some arguable bugs, we've heard few field complaints,
and any such problems are easily worked around by explicitly dropping temp
objects at the end of extension scripts (which seems like good practice
anyway).  So I won't risk a back-patch.

Discussion: https://postgr.es/m/e51f4311-f483-4dd0-1ccc-abec3c405110@BlueTreble.com
2016-12-02 14:57:55 -05:00
182db07040 Fix test about ignoring extension dependencies during extension scripts.
Commit 08dd23cec introduced an exception to the rule that extension member
objects can only be dropped as part of dropping the whole extension,
intending to allow such drops while running the extension's own creation or
update scripts.  However, the exception was only applied at the outermost
recursion level, because it was modeled on a pre-existing check to ignore
dependencies on objects listed in pendingObjects.  Bug #14434 from Philippe
Beaudoin shows that this is inadequate: in some cases we can reach an
extension member object by recursion from another one.  (The bug concerns
the serial-sequence case; I'm not sure if there are other cases, but there
might well be.)

To fix, revert 08dd23cec's changes to findDependentObjects() and instead
apply the creating_extension exception regardless of stack level.

Having seen this example, I'm a bit suspicious that the pendingObjects
logic is also wrong and such cases should likewise be allowed at any
recursion level.  However, changing that would interact in subtle ways
with the recursion logic (at least it would need to be moved to after the
recursing-from check).  Given that the code's been like that a long time,
I'll refrain from touching it without a clear example showing it's wrong.

Back-patch to all active branches.  In HEAD and 9.6, where suitable
test infrastructure exists, add a regression test case based on the
bug report.

Report: <20161125151448.6529.33039@wrigleys.postgresql.org>
Discussion: <13224.1480177514@sss.pgh.pa.us>
2016-11-26 13:31:35 -05:00
4aaddf2f00 Fix commit_ts for FrozenXid and BootstrapXid
Previously, requesting commit timestamp for transactions
FrozenTransactionId and BootstrapTransactionId resulted in an error.
But since those values can validly appear in committed tuples' Xmin,
this behavior is unhelpful and error prone: each caller would have to
special-case those values before requesting timestamp data for an Xid.
We already have a perfectly good interface for returning "the Xid you
requested is too old for us to have commit TS data for it", so let's use
that instead.

Backpatch to 9.5, where commit timestamps appeared.

Author: Craig Ringer
Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
2016-11-24 15:39:55 -03:00
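
After this change, asking for the commit timestamp of the special XIDs reports "no data" (NULL at the SQL level, if I read the interface correctly) instead of raising an error; a quick sketch, assuming track_commit_timestamp is on:

    SELECT pg_xact_commit_timestamp('1'::xid);  -- BootstrapTransactionId
    SELECT pg_xact_commit_timestamp('2'::xid);  -- FrozenTransactionId
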
650b967076 Change qr/foo$/m to qr/foo\n/m, for Perl 5.8.8.
In each case, absence of a trailing newline would itself constitute a
PostgreSQL bug.  Therefore, this slightly enhances the changed tests.
This works around a bug that last appeared in Perl 5.8.8, fixing
src/test/modules/test_pg_dump when run against that version.  Commit
e7293e3271bf618eeb2d4779a15fc516a69fe463 worked around the bug, but the
subsequent addition of test_pg_dump introduced affected code.  As that
commit had shown, slight increases in pattern complexity can suppress
the bug.  This commit edits qr/foo$/m patterns too complex to encounter
the bug today, for style consistency and robustness against unrelated
pattern changes.  Back-patch to 9.6, where test_pg_dump was introduced.

As of this writing, a fresh MSYS installation includes an affected Perl
5.8.8.  The Perl 5.8.8 in Red Hat Enterprise Linux 5.11 carries a patch
that renders it unaffected, but the Perl 5.8.5 of Red Hat Enterprise
Linux 4.4 is affected.
2016-11-07 20:27:30 -05:00
00f15338b2 Preserve commit timestamps across clean restart
An oversight in setting the boundaries of known commit timestamps during
startup caused old commit timestamps to become inaccessible after a
server restart.

Author and reporter: Julien Rouhaud
Review, test code: Craig Ringer
2016-10-24 09:45:48 -03:00
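
How the regression would typically show up (table name illustrative): read back the commit timestamp of an old row after a clean restart; with the boundary fix, timestamps recorded before the restart remain retrievable.

    SELECT xmin, pg_xact_commit_timestamp(xmin)
      FROM some_table
     LIMIT 1;
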
d2ce38e204 Rename WAIT_* constants to PG_WAIT_*.
Windows apparently has a constant named WAIT_TIMEOUT, and some of these
other names are pretty generic, too.  Insert "PG_" at the front of each
name in order to disambiguate.

Michael Paquier
2016-10-05 08:04:52 -04:00
976a1ce910 Adjust worker_spi for 6f3bd98ebfc008cbd676da777bb0b2376c4c4bfa. 2016-10-04 11:18:43 -04:00
6f3bd98ebf Extend framework from commit 53be0b1ad to report latch waits.
WaitLatch, WaitLatchOrSocket, and WaitEventSetWait now take an
additional wait_event_info parameter; legal values are defined in
pgstat.h.  This makes it possible to uniquely identify every point in
the core code where we are waiting for a latch; extensions can pass
WAIT_EXTENSION.

Because latches were the major wait primitive not previously covered
by this patch, it is now possible to see information in
pg_stat_activity on a large number of important wait events not
previously addressed, such as ClientRead, ClientWrite, and SyncRep.

Unfortunately, many of the wait events added by this patch will fail
to appear in pg_stat_activity because they're only used in background
processes which don't currently appear in pg_stat_activity.  We should
fix this either by creating a separate view for such information, or
else by deciding to include them in pg_stat_activity after all.

Michael Paquier and Robert Haas, reviewed by Alexander Korotkov and
Thomas Munro.
2016-10-04 11:01:42 -04:00
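
With latch waits now reported, the new wait information is visible directly in the stats view, for example:

    SELECT pid, state, wait_event_type, wait_event
      FROM pg_stat_activity
     WHERE wait_event IS NOT NULL;

A backend blocked waiting for its client would typically show wait_event = 'ClientRead'.
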
4929704acb worker_spi: Call pgstat_report_stat.
Without this, statistics changes accumulated by the worker never get
reported to the stats collector, which is bad.

Julien Rouhaud
2016-09-28 12:42:48 -04:00
19acee8c5a Fix two src/test/modules Makefiles
commit_ts and test_pg_dump were declaring targets before including the
PGXS stanza, which meant that the "all" target, customarily defined as
the first (and therefore default) target, was not the default anymore.
Fix that by moving those target definitions to after PGXS.

commit_ts was initially good, but I broke it in commit 9def031bd2;
test_pg_dump was born broken, probably copying from commit_ts' mistake.

In passing, fix a comment mistake in test_pg_dump/Makefile.

Backpatch to 9.6.

Noted by Tom Lane.
2016-09-08 14:39:05 -03:00
df5d9bb8d5 Allow pg_dump to dump non-extension members of an extension-owned schema.
Previously, if a schema was created by an extension, a normal pg_dump run
(not --binary-upgrade) would summarily skip every object in that schema.
In a case where an extension creates a schema and then users create other
objects within that schema, this does the wrong thing: we want pg_dump
to skip the schema but still create the non-extension-owned objects.

There's no easy way to fix this pre-9.6, because in earlier versions the
"dump" status for a schema is just a bool and there's no way to distinguish
"dump me" from "dump my members".  However, as of 9.6 we do have enough
state to represent that, so this is a simple correction of the logic in
selectDumpableNamespace.

In passing, make some cosmetic fixes in nearby code.

Martín Marqués, reviewed by Michael Paquier

Discussion: <99581032-71de-6466-c325-069861f1947d@2ndquadrant.com>
2016-09-08 13:12:01 -04:00
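
A sketch of the scenario being fixed (schema and table names are made up): the extension's script creates a schema, and a user later puts an ordinary table in it; pg_dump should skip the schema definition itself but still dump the user's table.

    -- inside the extension's script:
    --   CREATE SCHEMA ext_schema;
    -- later, created by a user:
    CREATE TABLE ext_schema.user_table (id int, note text);
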
b5bce6c1ec Final pgindent + perltidy run for 9.6. 2016-08-15 13:42:51 -04:00
887feefe87 Don't CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch.
This coding pattern creates a race condition, because if an interesting
interrupt happens after we've checked InterruptPending but before we reset
our latch, the latch-setting done by the signal handler would get lost,
and then we might block at WaitLatch in the next iteration without ever
noticing the interrupt condition.  You can put the CHECK_FOR_INTERRUPTS
before WaitLatch or after ResetLatch, but not between them.

Aside from fixing the bugs, add some explanatory comments to latch.h
to perhaps forestall the next person from making the same mistake.

In HEAD, also replace gather_readnext's direct call of
HandleParallelMessages with CHECK_FOR_INTERRUPTS.  It does not seem clean
or useful for this one caller to bypass ProcessInterrupts and go straight
to HandleParallelMessages; not least because that fails to consider the
InterruptPending flag, resulting in useless work both here
(if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call
(if it is).

This thinko seems to have been introduced in the initial coding of
storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all
the subsequent parallel-query support logic.  Back-patch relevant hunks
to 9.4 to extirpate the error everywhere.

Discussion: <1661.1469996911@sss.pgh.pa.us>
2016-08-01 15:13:53 -04:00
f9e439b1ca Correctly handle owned sequences with extensions
With the refactoring of pg_dump to handle components, getOwnedSeqs needs
to be a bit more intelligent regarding which components to dump when.
Specifically, we can't simply use the owning table's components as the
set of components to dump as the table might only be including certain
components while all components of the sequence should be dumped, for
example, when the table is a member of an extension while the sequence
is not.

Handle this by combining the set of components to be dumped for the
sequence explicitly and those to be dumped for the table when setting
the components to be dumped for the sequence.

Also add a number of regression tests around this to, hopefully, catch
any future changes which break the expected behavior.

Discovered by: Philippe BEAUDOIN
Reviewed by: Michael Paquier
2016-07-31 10:57:15 -04:00
40fcfec82c Message style improvements 2016-07-25 22:07:44 -04:00
18555b1323 Establish conventions about global object names used in regression tests.
To ensure that "make installcheck" can be used safely against an existing
installation, we need to be careful about what global object names
(database, role, and tablespace names) we use; otherwise we might
accidentally clobber important objects.  There's been a weak consensus that
test databases should have names including "regression", and that test role
names should start with "regress_", but we didn't have any particular rule
about tablespace names; and neither of the other rules was followed with
any consistency either.

This commit moves us a long way towards having a hard-and-fast rule that
regression test databases must have names including "regression", and that
test role and tablespace names must start with "regress_".  It's not
completely there because I did not touch some test cases in rolenames.sql
that test creation of special role names like "session_user".  That will
require some rethinking of exactly what we want to test, whereas the intent
of this patch is just to hit all the cases in which the needed renamings
are cosmetic.

There is no enforcement mechanism in this patch either, but if we don't
add one we can expect that the tests will soon be violating the convention
again.  Again, that's not such a cosmetic change and it will require
discussion.  (But I did use a quick-hack enforcement patch to find these
cases.)

Discussion: <16638.1468620817@sss.pgh.pa.us>
2016-07-17 18:42:43 -04:00
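
The conventions in concrete terms (names and the tablespace path are illustrative): global objects created by the tests are made recognizable, so "make installcheck" against a live installation cannot silently clobber real ones.

    CREATE ROLE regress_test_role;
    CREATE TABLESPACE regress_test_tblspc LOCATION '/tmp/regress_tblspc';
    -- test databases carry "regression" in the name, e.g. contrib_regression
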
548af97fce Provide and use a makefile target to build all generated headers.
As of 9.6, pg_regress doesn't build unless storage/lwlocknames.h has been
created; but there was nothing forcing that to happen if you just went into
src/test/regress/ and built there.  We previously had a similar complaint
about plpython.

To fix in a way that won't break next time we invent a generated header,
make src/backend/Makefile expose a phony target for updating all the
include files it builds, and invoke that before building pg_regress or
plpython.  In principle, maybe we ought to invoke that everywhere; but
it would add a lot of usually-useless make cycles, so let's just do it
in the places where people have complained.

I made a couple of cosmetic adjustments in src/backend/Makefile as well,
to deal with the generated headers in consistent orders.

Michael Paquier and Tom Lane

Report: <31398.1467036827@sss.pgh.pa.us>
Report: <20150916200959.GB32090@msg.df7cb.de>
2016-07-01 15:09:02 -04:00
3be0a62ffe Finish pgindent run for 9.6: Perl files. 2016-06-12 04:19:56 -04:00
4bc424b968 pgindent run for 9.6 2016-06-09 18:02:36 -04:00
562f06f3f0 pg_dump only selected components of ACCESS METHODs
dumpAccessMethod() didn't get the memo that we now have a bitfield for
the components which should be dumped instead of a simple boolean.

Correct that by checking if the relevant bit is set for each component
being dumped out (and not dumping it out if it isn't set).

This corrects an issue where CREATE ACCESS METHOD commands were being
included in non-binary-upgrade dumps when an extension included an access
method (as the bloom extension does).

Also add a regression test to make sure that we only dump out the
ACCESS METHOD commands, when they are part of an extension, when doing
a binary upgrade.

Pointed out by Thom Brown.
2016-06-07 09:56:02 -04:00
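
The user-visible symptom, sketched with contrib/bloom:

    CREATE EXTENSION bloom;   -- installs the "bloom" access method

After the fix, a plain (non-binary-upgrade) pg_dump of such a database no longer emits a separate CREATE ACCESS METHOD command; only a binary-upgrade dump recreates it explicitly.
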
e1623c3959 Fix various common misspellings.
Mostly these are just comments but there are a few in documentation
and a handful in code and tests. Hopefully this doesn't cause too much
unnecessary pain for backpatching. I relented from some of the most
common like "thru" for that reason. The rest don't seem numerous
enough to cause problems.

Thanks to Kevin Lyda's tool https://pypi.python.org/pypi/misspellings
2016-06-03 16:08:45 +01:00
2e8b4bf804 Qualify table usage in dumpTable() and use regclass
All of the other tables used in the query in dumpTable(), which is
collecting column-level ACLs, are qualified, so we should also qualify
pg_init_privs, the related sub-select against pg_class, and the other
queries added as part of the pg_dump catalog ACLs work.

Also, use ::regclass (or ::pg_catalog.regclass, where appropriate)
instead of using a poorly constructed query to get the OID for various
catalog tables.

Issues identified by Noah and Alvaro, patch by me.
2016-05-24 20:10:16 -04:00
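
A side-by-side sketch of the idiom change (these queries are illustrative, not the exact ones in pg_dump):

    -- preferred: let regclass do the schema-aware lookup
    SELECT 'pg_catalog.pg_class'::pg_catalog.regclass::oid;

    -- instead of a hand-rolled lookup such as:
    SELECT c.oid
      FROM pg_catalog.pg_class c
      JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relname = 'pg_class' AND n.nspname = 'pg_catalog';
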
6e243c43c9 Add test_pg_dump to @contrib_excludes
The test_pg_dump extension doesn't have a C component, so we need to
exclude it so that the MSVC build system doesn't try to figure out how
to build it.

Also add a "MODULES" line to the Makefile, as test_extensions has.
Might not be necessary, but seems good to keep things consistent.

Lastly, remove the 'installcheck' line from test_pg_dump, as that
was causing redefinition errors, at least on my box.  This also
makes test_pg_dump consistent with how commit_ts is set up.
2016-05-06 16:39:56 -04:00
eccfeeb631 Remove MODULES_big from test_pg_dump
The Makefile for test_pg_dump shouldn't have a MODULES_big line
because there's no actual compiled bit for that extension.  Hopefully
this will fix the Windows buildfarm members which were complaining.

In passing, also add the 'prove_installcheck' bit to the pg_dump and
test_pg_dump Makefiles, to get the buildfarm members to actually run
those tests.
2016-05-06 15:26:57 -04:00
6bd356c33a Add TAP tests for pg_dump
This TAP test suite will create a new cluster, populate it based on
the 'create_sql' values in the '%tests' hash, run all of the runs
defined in the '%pgdump_runs' hash, and then for each test in the
'%tests' hash, compare each run's output to the regular expression
defined for the test under the 'like' and 'unlike' functions, as
appropriate.

While this test suite covers a fair bit of ground (67% of pg_dump.c
and quite a bit of the other files in src/bin/pg_dump), there is
still quite a bit which remains to be added to provide better code
coverage.  Still, this is quite a bit better than we had, and has
found a few bugs already (note that the CREATE TRANSFORM test is
commented out, as it is currently failing).

Idea for using the TAP system from Tom, though all of the code is mine.
2016-05-06 14:06:50 -04:00
c1543a81a7 Revert timeline following in replication slots
This reverts commits f07d18b6e94d, 82c83b337202, 3a3b309041b0, and
24c5f1a103ce.

This feature has shown enough immaturity that it was deemed better to
rip it out before rushing some more fixes at the last minute.  There are
discussions on larger changes in this area for the next release.
2016-05-04 17:32:22 -03:00
99f2f3c19a In recordExtensionInitPriv(), keep the scan til we're done with it
For reasons of sheer brain fade, we (I) were calling systable_endscan()
immediately after systable_getnext() and expecting the tuple returned
by systable_getnext() to still be valid.

That's clearly wrong.  Move the systable_endscan() down below the tuple
usage.

Discovered initially by Pavel Stehule and then also by Alvaro.

Add a regression test based on Alvaro's testing.
2016-04-15 21:57:15 -04:00
d8ed83cd7f Fix whitespace 2016-04-11 14:44:51 -04:00
29ca231b83 Add directory created during build to gitignore 2016-04-11 14:09:38 -04:00
848ef42bb8 Add the "snapshot too old" feature
This feature is controlled by a new old_snapshot_threshold GUC.  A
value of -1 disables the feature, and that is the default.  The
value of 0 is just intended for testing.  Above that it is the
number of minutes a snapshot can reach before pruning and vacuum
are allowed to remove dead tuples which the snapshot would
otherwise protect.  The xmin associated with a transaction ID does
still protect dead tuples.  A connection which is using an "old"
snapshot does not get an error unless it accesses a page modified
recently enough that it might not be able to produce accurate
results.

This is similar to the Oracle feature, and we use the same SQLSTATE
and error message for compatibility.
2016-04-08 14:36:30 -05:00
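
A minimal sketch of turning the feature on (the threshold value is arbitrary; old_snapshot_threshold can only be set at server start, so a restart is needed):

    ALTER SYSTEM SET old_snapshot_threshold = '10min';
    -- after restart, a session holding a sufficiently old snapshot can get
    --   ERROR:  snapshot too old
    -- when it touches a page that has been pruned in the meantime
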
f2fcad27d5 Support ALTER THING .. DEPENDS ON EXTENSION
This introduces a new dependency type which marks an object as depending
on an extension, such that if the extension is dropped, the object
automatically goes away; and also, if the database is dumped, the object
is included in the dump output.  Currently the grammar supports this for
indexes, triggers, materialized views and functions only, although the
utility code is generic so adding support for more object types is a
matter of touching the parser rules only.

Author: Abhijit Menon-Sen
Reviewed-by: Alexander Korotkov, Álvaro Herrera
Discussion: http://www.postgresql.org/message-id/20160115062649.GA5068@toroid.org
2016-04-05 18:38:54 -03:00
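
Syntax sketch for the object types the grammar currently accepts (object and extension names are made up):

    ALTER INDEX my_idx DEPENDS ON EXTENSION my_ext;
    ALTER TRIGGER my_trg ON my_table DEPENDS ON EXTENSION my_ext;
    ALTER MATERIALIZED VIEW my_mv DEPENDS ON EXTENSION my_ext;
    ALTER FUNCTION my_func(int) DEPENDS ON EXTENSION my_ext;

Dropping my_ext then drops these objects automatically, and pg_dump still includes them in regular dumps.
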
f07d18b6e9 test_slot_timelines: Fix alternate expected output 2016-04-01 18:36:07 -03:00
82c83b3372 Fix logical_decoding_timelines test crashes
In the test_slot_timelines test module, we were passing NULL values that
happened to be received as zeroes on x86, but on ARM (buildfarm member
hamster) this breaks with a crash instead.  Fix the breakage by
marking these functions as STRICT; the InvalidXid value that was
previously implicit in NULL values (on x86 at least) can now be passed
as 0.  Failing to follow the fmgr protocol to check for NULLs beforehand
was causing ARM to fail, as evidenced by segmentation faults in
buildfarm member hamster.

In order to use the new functionality in the test script, use COALESCE
in the right spot to avoid forwarding NULL values.

This was diagnosed from the hamster crash by Craig Ringer, who also
proposed a different patch (checking for NULL values explicitly in the
C function code, and keeping the non-strictness in the C functions).
I decided to go with this approach instead.
2016-04-01 16:47:00 -03:00
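
The shape of the fix at the SQL level (the function name and signature are hypothetical, and the CREATE FUNCTION would live in the module's extension script): declare the C function STRICT so the fmgr never passes it SQL NULLs, and use COALESCE at the call site so an unknown xid is forwarded as 0 (InvalidXid) rather than NULL.

    CREATE FUNCTION test_slot_set_xid(slot_name text, target_xid xid)
        RETURNS void
        AS 'MODULE_PATHNAME' LANGUAGE C STRICT;

    SELECT test_slot_set_xid('my_slot', COALESCE(NULL::xid, '0'::xid));
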
3a3b309041 I forgot the alternate expected file in previous commit
Without this, the test_slot_timelines module fails "make installcheck"
because the required feature is not enabled in a stock server.

Per buildfarm
2016-03-30 20:48:24 -03:00
24c5f1a103 Enable logical slots to follow timeline switches
When decoding from a logical slot, it's necessary for xlog reading to be
able to read xlog from historical (i.e. not current) timelines;
otherwise, decoding fails after failover, because the archives are in
the historical timeline.  This is required to make "failover logical
slots" possible; it currently has no other use, although theoretically
it could be used by an extension that creates a slot on a standby and
continues to replay from the slot when the standby is promoted.

This commit includes a module in src/test/modules with functions to
manipulate the slots (which is not otherwise possible in SQL code) in
order to enable testing, and a new test in src/test/recovery to ensure
that the behavior is as expected.

Author: Craig Ringer
Reviewed-By: Oleksii Kliukin, Andres Freund, Petr Jelínek
2016-03-30 20:07:05 -03:00
d5351fcb03 Fix phony .PHONY.
A couple makefiles had misspelled the magic .PHONY target as PHONY.
2016-03-19 17:19:37 -04:00
2c83f435a3 Rework PostgresNode's psql method
This makes the psql() method much more capable: it captures both stdout
and stderr; it now returns the psql exit code rather than stdout; a
timeout can now be specified, as can ON_ERROR_STOP behavior; it gained a
new "on_error_die" (defaulting to off) parameter to raise an exception
if there's any problem.  Finally, additional parameters to psql can be
passed if there's need for further tweaking.

For convenience, a new safe_psql() method retains much of the old
behavior of psql(), except that it turns on_error_die on, so that
problems like syntax errors in SQL commands can be detected more easily.

Many existing TAP test files now use safe_psql, which is what is really
wanted.  A couple of ->psql() calls are now added in the commit_ts
tests, which verify that the right thing is happening on certain errors.
Some ->command_fails() calls in recovery tests that were verifying that
psql failed also became ->psql() calls now.

Author: Craig Ringer. Some tweaks by Álvaro Herrera
Reviewed-By: Michaël Paquier
2016-03-03 17:58:30 -03:00
9def031bd2 Add more tests for commit_timestamp feature
These tests verify that 1) WAL replay preserves the stored value,
2) a streaming standby server replays the value obtained from the
master, and 3) the behavior is sensible in the face of repeated
configuration changes.

One annoyance is that the tmp_check/ subdir from the TAP tests is clobbered
when the pg_regress test runs in the same subdirectory.  This is
bothersome but not too terrible a problem, since the pg_regress test is
not run anyway if the TAP tests fail (unless "make -k" is used).

I had these tests around since commit 69e7235c93e2; add them now that we
have the recovery test framework in place.
2016-03-01 19:53:18 -03:00
bda0b08198 Add README in src/test and src/test/modules
Author: Craig Ringer
Reviewed by: Michaël Paquier
2016-02-25 21:08:32 -03:00
48c9f2889a Update copyright for 2016
On closer inspection, the reason copyright.pl was missing files is
that it is looking for 'Copyright (c)' and they had 'Copyright (C)'.
Fix that, and update a couple more that grepping for that revealed.
2016-01-02 14:19:48 -05:00
ad08bf5c8b Update copyright for 2016
Manually fix some copyright lines missed by the automated script.
2016-01-02 14:08:55 -05:00
ee94300446 Update copyright for 2016
Backpatch certain files through 9.1
2016-01-02 13:33:40 -05:00
a8d585c091 Message style improvements
Message style, plurals, quoting, spelling, consistency with similar
messages
2015-10-28 20:38:36 -04:00