first commit for openGauss connector jdbc

This commit is contained in:
lishifu
2020-06-30 14:58:21 +08:00
parent f2f872da1f
commit 8617931d2b
731 changed files with 137912 additions and 73 deletions

---
layout: default_docs
title: Arrays
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Physical and Logical replication API
previous: replication.html
nexttitle: Chapter 10. Using the Driver in a Multithreaded or a Servlet Environment
next: thread.html
---
PostgreSQL™ provides robust support for array data types as column types, function arguments
and criteria in where clauses. There are several ways to create arrays with pgjdbc.
The [java.sql.Connection.createArrayOf(String, Object\[\])](https://docs.oracle.com/javase/8/docs/api/java/sql/Connection.html#createArrayOf-java.lang.String-java.lang.Object:A-) method can be used to create a [java.sql.Array](https://docs.oracle.com/javase/8/docs/api/java/sql/Array.html) from `Object[]` instances (note: this includes both primitive and object multi-dimensional arrays).
A similar method `org.postgresql.PGConnection.createArrayOf(String, Object)` provides support for primitive array types.
The `java.sql.Array` object returned from these methods can be used in other methods, such as [PreparedStatement.setArray(int, Array)](https://docs.oracle.com/javase/8/docs/api/java/sql/PreparedStatement.html#setArray-int-java.sql.Array-).
Additionally, the following types of arrays can be used in `PreparedStatement.setObject` methods and will use the defined type mapping:
Java Type | Default PostgreSQL™ Type
--- | ---
`short[]` | `int2[]`
`int[]` | `int4[]`
`long[]` | `int8[]`
`float[]` | `float4[]`
`double[]` | `float8[]`
`boolean[]` | `bool[]`
`String[]` | `varchar[]`
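For example, the following sketch stores an `Integer[]` using `createArrayOf` and `setArray` (the `scores` table, its columns, and the method name are ours, purely for illustration):

```java
import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ArrayExample {
    // SQL for a hypothetical table: CREATE TABLE scores (name text, points int4[])
    static final String INSERT_SQL = "INSERT INTO scores (name, points) VALUES (?, ?)";

    // Store an Integer[] via Connection.createArrayOf and PreparedStatement.setArray.
    static void insertScores(Connection conn, String name, Integer[] points) throws SQLException {
        Array arr = conn.createArrayOf("int4", points);
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setString(1, name);
            ps.setArray(2, arr);   // or: ps.setObject(2, points), using the mapping above
            ps.executeUpdate();
        } finally {
            arr.free();            // release the server-side resources, if any
        }
    }
}
```

With the default type mapping from the table above, `ps.setObject(2, new int[]{1, 2, 3})` would likewise be sent as `int4[]`.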

---
layout: default_docs
title: Chapter 7. Storing Binary Data
header: Chapter 7. Storing Binary Data
resource: media
previoustitle: Chapter 6. Calling Stored Functions
previous: callproc.html
nexttitle: Chapter 8. JDBC escapes
next: escapes.html
---
PostgreSQL™ provides two distinct ways to store binary data. Binary data can be
stored in a table using the data type BYTEA or by using the Large Object feature
which stores the binary data in a separate table in a special format and refers
to that table by storing a value of type OID in your table.
In order to determine which method is appropriate you need to understand the
limitations of each method. The BYTEA data type is not well suited for storing
very large amounts of binary data. While a column of type BYTEA can hold up to
1 GB of binary data, it would require a huge amount of memory to process such a
large value. The Large Object method for storing binary data is better suited to
storing very large values, but it has its own limitations. Specifically, deleting
a row that contains a Large Object reference does not delete the Large Object;
deleting the Large Object is a separate operation that needs to be performed.
Large Objects also have some security issues since anyone connected to the
database can view and/or modify any Large Object, even if they don't have
permissions to view/update the row containing the Large Object reference.
Version 7.2 was the first release of the JDBC driver to support the BYTEA
data type. Its introduction changed the driver's behavior compared to
previous releases. Since 7.2, the methods `getBytes()`,
`setBytes()`, `getBinaryStream()`, and `setBinaryStream()` operate on the BYTEA
data type. In 7.1 and earlier, these methods operated on the OID data type
associated with Large Objects. It is possible to revert the driver back to the
old 7.1 behavior by setting the property `compatible` on the `Connection` object
to the value `7.1`. More details on connection properties are available in the
section called [“Connection Parameters”](connect.html#connection-parameters).
To use the BYTEA data type you should simply use the `getBytes()`, `setBytes()`,
`getBinaryStream()`, or `setBinaryStream()` methods.
To use the Large Object functionality you can use either the `LargeObject` class
provided by the PostgreSQL™ JDBC driver or the `getBlob()` and `setBlob()`
methods.
### Important
> You must access Large Objects within an SQL transaction block. You can start a
transaction block by calling `setAutoCommit(false)`.
[Example 7.1, “Processing Binary Data in JDBC”](binary-data.html#binary-data-example)
contains some examples on how to process binary data using the PostgreSQL™ JDBC
driver.
<a name="binary-data-example"></a>
***Example 7.1. Processing Binary Data in JDBC***
For example, suppose you have a table containing the file names of images and you
also want to store the image in a BYTEA column:
```sql
CREATE TABLE images (imgname text, img bytea);
```
To insert an image, you would use:
```java
File file = new File("myimage.gif");
FileInputStream fis = new FileInputStream(file);
PreparedStatement ps = conn.prepareStatement("INSERT INTO images VALUES (?, ?)");
ps.setString(1, file.getName());
ps.setBinaryStream(2, fis, (int)file.length());
ps.executeUpdate();
ps.close();
fis.close();
```
Here, `setBinaryStream()` transfers a set number of bytes from a stream into the
column of type BYTEA. This also could have been done using the `setBytes()` method
if the contents of the image were already in a `byte[]`.
### Note
> The length parameter to `setBinaryStream` must be correct. There is no way to
indicate that the stream is of unknown length. If you are in this situation, you
must read the stream yourself into temporary storage and determine the length.
Now with the correct length you may send the data from temporary storage on to
the driver.
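Buffering a stream of unknown length can be done with a small helper like the following (a sketch; the class and method names are ours):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamBuffer {
    // Read an InputStream of unknown length fully into a byte[] so that
    // its exact length is known before handing it to setBytes() or
    // setBinaryStream(). Suitable only for data that fits in memory.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```

With the data buffered, `ps.setBytes(2, data)` or `ps.setBinaryStream(2, new ByteArrayInputStream(data), data.length)` can then be used with the correct length.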
Retrieving an image is even easier. (We use `PreparedStatement` here, but the
`Statement` class can equally be used.)
```java
PreparedStatement ps = conn.prepareStatement("SELECT img FROM images WHERE imgname = ?");
ps.setString(1, "myimage.gif");
ResultSet rs = ps.executeQuery();
while (rs.next())
{
byte[] imgBytes = rs.getBytes(1);
// use the data in some way here
}
rs.close();
ps.close();
```
Here the binary data was retrieved as a `byte[]`. You could have used an
`InputStream` object instead.
Alternatively you could be storing a very large file and want to use the
`LargeObject` API to store the file:
```sql
CREATE TABLE imageslo (imgname text, imgoid oid);
```
To insert an image, you would use:
```java
// All LargeObject API calls must be within a transaction block
conn.setAutoCommit(false);
// Get the Large Object Manager to perform operations with
LargeObjectManager lobj = conn.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
// Create a new large object
long oid = lobj.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
// Open the large object for writing
LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);
// Now open the file
File file = new File("myimage.gif");
FileInputStream fis = new FileInputStream(file);
// Copy the data from the file to the large object
byte buf[] = new byte[2048];
int s, tl = 0;
while ((s = fis.read(buf, 0, 2048)) > 0)
{
obj.write(buf, 0, s);
tl += s;
}
// Close the large object
obj.close();
// Now insert the row into imageslo
PreparedStatement ps = conn.prepareStatement("INSERT INTO imageslo VALUES (?, ?)");
ps.setString(1, file.getName());
ps.setLong(2, oid);
ps.executeUpdate();
ps.close();
fis.close();
// Finally, commit the transaction.
conn.commit();
```
Retrieving the image from the Large Object:
```java
// All LargeObject API calls must be within a transaction block
conn.setAutoCommit(false);
// Get the Large Object Manager to perform operations with
LargeObjectManager lobj = conn.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
PreparedStatement ps = conn.prepareStatement("SELECT imgoid FROM imageslo WHERE imgname = ?");
ps.setString(1, "myimage.gif");
ResultSet rs = ps.executeQuery();
while (rs.next())
{
// Open the large object for reading
long oid = rs.getLong(1);
LargeObject obj = lobj.open(oid, LargeObjectManager.READ);
// Read the data
byte buf[] = new byte[obj.size()];
obj.read(buf, 0, obj.size());
// Do something with the data read here
// Close the object
obj.close();
}
rs.close();
ps.close();
// Finally, commit the transaction.
conn.commit();
```

---
layout: default_docs
title: Chapter 6. Calling Stored Functions
header: Chapter 6. Calling Stored Functions
resource: media
previoustitle: Creating and Modifying Database Objects
previous: ddl.html
nexttitle: Chapter 7. Storing Binary Data
next: binary-data.html
---
**Table of Contents**
* [Obtaining a `ResultSet` from a stored function](callproc.html#callproc-resultset)
* [From a Function Returning `SETOF` type](callproc.html#callproc-resultset-setof)
* [From a Function Returning a refcursor](callproc.html#callproc-resultset-refcursor)
<a name="call-function-example"></a>
**Example 6.1. Calling a built in stored function**
This example shows how to call a PostgreSQL™ built in function, `upper`, which
simply converts the supplied string argument to uppercase.
```java
CallableStatement upperProc = conn.prepareCall("{? = call upper( ? ) }");
upperProc.registerOutParameter(1, Types.VARCHAR);
upperProc.setString(2, "lowercase to uppercase");
upperProc.execute();
String upperCased = upperProc.getString(1);
upperProc.close();
```
<a name="callproc-resultset"></a>
# Obtaining a `ResultSet` from a stored function
PostgreSQL's™ stored functions can return results in two different ways: the
function may return either a refcursor value or a `SETOF` of some data type.
Which of these return methods is used determines how the function should be
called.
<a name="callproc-resultset-setof"></a>
## From a Function Returning `SETOF` type
Functions that return data as a set should not be called via the `CallableStatement`
interface, but instead should use the normal `Statement` or `PreparedStatement`
interfaces.
<a name="setof-resultset"></a>
**Example 6.2. Getting `SETOF` type values from a function**
```java
Statement stmt = conn.createStatement();
stmt.execute("CREATE OR REPLACE FUNCTION setoffunc() RETURNS SETOF int AS "
+ "' SELECT 1 UNION SELECT 2;' LANGUAGE sql");
ResultSet rs = stmt.executeQuery("SELECT * FROM setoffunc()");
while (rs.next())
{
// do something
}
rs.close();
stmt.close();
```
<a name="callproc-resultset-refcursor"></a>
## From a Function Returning a refcursor
When calling a function that returns a refcursor you must cast the return value of
`getObject` to a `ResultSet`.
### Note
> One notable limitation of the current support for a `ResultSet` created from
a refcursor is that even though it is a cursor backed `ResultSet`, all data will
be retrieved and cached on the client. The `Statement` fetch size parameter
described in the section called [“Getting results based on a cursor”](query.html#query-with-cursor)
is ignored. This limitation is a deficiency of the JDBC driver, not the server;
it is technically possible to remove, but it has not yet been implemented.
<a name="get-refcursor-from-function-call"></a>
**Example 6.3. Getting refcursor Value From a Function**
```java
// Setup function to call.
Statement stmt = conn.createStatement();
stmt.execute("CREATE OR REPLACE FUNCTION refcursorfunc() RETURNS refcursor AS '"
+ " DECLARE "
+ " mycurs refcursor; "
+ " BEGIN "
+ " OPEN mycurs FOR SELECT 1 UNION SELECT 2; "
+ " RETURN mycurs; "
+ " END;' language plpgsql");
stmt.close();
// We must be inside a transaction for cursors to work.
conn.setAutoCommit(false);
// Procedure call.
CallableStatement proc = conn.prepareCall("{? = call refcursorfunc() }");
proc.registerOutParameter(1, Types.OTHER);
proc.execute();
ResultSet results = (ResultSet) proc.getObject(1);
while (results.next())
{
// do something with the results.
}
results.close();
proc.close();
```
It is also possible to treat the refcursor return value as a cursor name directly.
To do this, use the `getString` method of `CallableStatement`. With the underlying cursor name,
you are free to directly use cursor commands on it, such as `FETCH` and `MOVE`.
<a name="refcursor-string-example"></a>
**Example 6.4. Treating refcursor as a cursor name**
```java
conn.setAutoCommit(false);
CallableStatement proc = conn.prepareCall("{? = call refcursorfunc() }");
proc.registerOutParameter(1, Types.OTHER);
proc.execute();
String cursorName = proc.getString(1);
proc.close();
```
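With the cursor name in hand, a `FETCH` can be issued over the same transaction. The sketch below builds and runs the statement (the class and method names are ours; the name is double-quoted because generated refcursor names such as `<unnamed portal 1>` contain spaces and angle brackets):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RefcursorFetch {
    // Build a FETCH statement for a refcursor name, quoting the identifier
    // (embedded double quotes are doubled, per SQL identifier rules).
    static String fetchAllSql(String cursorName) {
        return "FETCH ALL IN \"" + cursorName.replace("\"", "\"\"") + "\"";
    }

    // Must run inside the same transaction that created the cursor.
    static void fetchAll(Connection conn, String cursorName) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(fetchAllSql(cursorName))) {
            while (rs.next()) {
                // process each row
            }
        }
    }
}
```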

---
layout: default_docs
title: Setting up the Class Path
header: Chapter 2. Setting up the JDBC Driver
resource: media
previoustitle: Chapter 2. Setting up the JDBC Driver
previous: setup.html
nexttitle: Preparing the Database Server for JDBC
next: prepare.html
---
To use the driver, the JAR archive (named `postgresql.jar` if you built from source,
otherwise likely named following the convention `postgresql-*[server version]*.*[build number]*.jdbc*[JDBC version]*.jar`,
for example `postgresql-8.0-310.jdbc3.jar`) needs to be included in the class path,
either by putting it in the `CLASSPATH` environment variable, or by using flags on
the **java** command line.
For instance, assume we have an application that uses the JDBC driver to access
a database, and that application is installed as `/usr/local/lib/myapp.jar`. The
PostgreSQL™ JDBC driver is installed as `/usr/local/pgsql/share/java/postgresql.jar`.
To run the application, we would use:
```bash
export CLASSPATH=/usr/local/lib/myapp.jar:/usr/local/pgsql/share/java/postgresql.jar:.
java MyApp
```
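The same class path can instead be supplied with the **java** command's `-cp` flag rather than the `CLASSPATH` variable (shown with `echo` so the command is illustrative; `MyApp` is the hypothetical application class from above):

```bash
# Assemble the class path in a variable and pass it with -cp instead of
# exporting CLASSPATH; the two forms are equivalent.
CP=/usr/local/lib/myapp.jar:/usr/local/pgsql/share/java/postgresql.jar:.
# Drop the echo to actually launch the application.
echo java -cp "$CP" MyApp
```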
Loading the driver from within the application is covered in [Chapter 3, Initializing the Driver](use.html).

---
layout: default_docs
title: Connecting to the Database
header: Chapter 3. Initializing the Driver
resource: media
previoustitle: Loading the Driver
previous: load.html
nexttitle: Chapter 4. Using SSL
next: ssl.html
---
With JDBC, a database is represented by a URL (Uniform Resource Locator). With
PostgreSQL, this takes one of the following forms:
* jdbc:postgresql:*`database`*
* jdbc:postgresql:/
* jdbc:postgresql://*`host/database`*
* jdbc:postgresql://*`host/`*
* jdbc:postgresql://*`host:port/database`*
* jdbc:postgresql://*`host:port/`*
The parameters have the following meanings:
* *`host`*
The host name of the server. Defaults to `localhost`. To specify an IPv6
address you must enclose the `host` parameter in square brackets, for
example:
jdbc:postgresql://[::1]:5740/accounting
* *`port`*
The port number the server is listening on. Defaults to the PostgreSQL™
standard port number (5432).
* *`database`*
The database name. The default is to connect to a database with the same name
as the user name.
To connect, you need to get a `Connection` instance from JDBC. To do this, you use
the `DriverManager.getConnection()` method:
`Connection db = DriverManager.getConnection(url, username, password);`
<a name="connection-parameters"></a>
## Connection Parameters
In addition to the standard connection parameters, the driver supports a number
of additional properties which can be used to specify driver behaviour
specific to PostgreSQL™. These properties may be specified in either the connection
URL or an additional `Properties` object parameter to `DriverManager.getConnection`.
The following examples illustrate the use of both methods to establish an SSL
connection.
```java
String url = "jdbc:postgresql://localhost/test";
Properties props = new Properties();
props.setProperty("user", "fred");
props.setProperty("password", "secret");
props.setProperty("ssl", "true");
Connection conn = DriverManager.getConnection(url, props);

// Alternatively, pass the parameters in the URL itself:
String urlWithParams = "jdbc:postgresql://localhost/test?user=fred&password=secret&ssl=true";
Connection conn2 = DriverManager.getConnection(urlWithParams);
```
* **user** = String
The database user on whose behalf the connection is being made.
* **password** = String
The database user's password.
* **ssl** = boolean
Connect using SSL. The driver must have been compiled with SSL support.
This property does not need a value associated with it. The mere presence
of it specifies an SSL connection. However, for compatibility with future
versions, the value "true" is preferred. For more information see [Chapter
4, *Using SSL*](ssl.html).
* **sslfactory** = String
The provided value is a class name to use as the `SSLSocketFactory` when
establishing an SSL connection. For more information see the section
called [“Custom SSLSocketFactory”](ssl-factory.html).
* **sslfactoryarg** (deprecated) = String
This value is an optional argument to the constructor of the sslfactory
class provided above. For more information see the section called [“Custom SSLSocketFactory”](ssl-factory.html).
* **sslmode** = String
Possible values include `disable`, `allow`, `prefer`, `require`, `verify-ca` and `verify-full`.
`require`, `allow` and `prefer` all default to a non-validating SSL factory and do not check the
validity of the certificate or the host name. `verify-ca` validates the certificate, but does not
verify the hostname. `verify-full` will validate that the certificate is correct and verify that the
host connected to has the same hostname as the certificate.
Setting any of these will necessitate storing the server certificate on the client machine; see
["Configuring the client"](ssl-client.html) for details.
* **sslcert** = String
Provide the full path for the certificate file. Defaults to `/defaultdir/postgresql.crt`.
*Note:* defaultdir is `${user.home}/.postgresql/` on *nix systems and `%appdata%/postgresql/` on Windows.
* **sslkey** = String
Provide the full path for the key file. Defaults to /defaultdir/postgresql.pk8
* **sslrootcert** = String
File name of the SSL root certificate. Defaults to defaultdir/root.crt
* **sslhostnameverifier** = String
Class name of hostname verifier. Defaults to using `org.postgresql.ssl.PGjdbcHostnameVerifier`
* **sslpasswordcallback** = String
Class name of the SSL password provider. Defaults to `org.postgresql.ssl.jdbc4.LibPQFactory.ConsoleCallbackHandler`
* **sslpassword** = String
If provided will be used by ConsoleCallbackHandler
* **sendBufferSize** = int
Sets SO_SNDBUF on the connection stream
* **recvBufferSize** = int
Sets SO_RCVBUF on the connection stream
* **protocolVersion** = int
The driver supports the V3 frontend/backend protocols. The V3 protocol was introduced in 7.4 and
the driver will by default try to connect using the V3 protocol.
* **loggerLevel** = String
Logger level of the driver. Allowed values: `OFF`, `DEBUG` or `TRACE`.
This enables the `java.util.logging.Logger` level of the driver based on the following mapping
of levels: DEBUG -&gt; FINE, TRACE -&gt; FINEST. This property is intended for debugging the driver,
not for general SQL query logging.
* **loggerFile** = String
File name output of the Logger. If set, the Logger will use a `java.util.logging.FileHandler`
to write to the specified file. If the parameter is not set or the file can't be created, the
`java.util.logging.ConsoleHandler` will be used instead. This parameter should be used
together with `loggerLevel`.
* **allowEncodingChanges** = boolean
When using the V3 protocol the driver monitors changes in certain server
configuration parameters that should not be touched by end users. The
`client_encoding` setting is set by the driver and should not be altered.
If the driver detects a change it will abort the connection. There is
one legitimate exception to this behaviour though, using the `COPY` command
on a file residing on the server's filesystem. The only means of specifying
the encoding of this file is by altering the `client_encoding` setting.
The JDBC team considers this a failing of the `COPY` command and hopes to
provide an alternate means of specifying the encoding in the future, but
for now there is this URL parameter. Enable this only if you need to
override the client encoding when doing a copy.
* **logUnclosedConnections** = boolean
Clients may leak `Connection` objects by failing to call their `close()`
method. Eventually these objects will be garbage collected and the
`finalize()` method will be called, which will close the `Connection` if
the caller has neglected to do so. The use of a finalizer is just
a stopgap solution. To help developers detect and correct the source of
these leaks, the `logUnclosedConnections` URL parameter has been added.
It captures a stack trace at each `Connection` opening, and if the `finalize()`
method is reached without the connection having been closed, the stack trace is printed
to the log.
* **autosave** = String
Specifies what the driver should do if a query fails. In `autosave=always` mode, the JDBC driver sets a savepoint before each query
and rolls back to that savepoint in case of failure. In `autosave=never` mode (the default), no savepoints are ever used.
In `autosave=conservative` mode, a savepoint is set for each query, but the rollback and retry are done only in rare cases
like 'cached statement cannot change return type' or 'statement XXX is not valid', in which case the JDBC driver rolls back and retries the query.
The default is `never`.
* **binaryTransferEnable** = String
A comma separated list of types to enable binary transfer. Either OID numbers or names.
* **binaryTransferDisable** = String
A comma separated list of types to disable binary transfer. Either OID numbers or names.
Overrides values in the driver default set and values set with binaryTransferEnable.
* **prepareThreshold** = int
Determine the number of `PreparedStatement` executions required before
switching over to use server side prepared statements. The default is
five, meaning start using server side prepared statements on the fifth
execution of the same `PreparedStatement` object. More information on
server side prepared statements is available in the section called
[“Server Prepared Statements”](server-prepare.html).
* **preparedStatementCacheQueries** = int
Determine the number of queries that are cached in each connection.
The default is 256, meaning if you use more than 256 different queries
in `prepareStatement()` calls, the least recently used ones
will be discarded. The cache allows application to benefit from
[“Server Prepared Statements”](server-prepare.html)
(see `prepareThreshold`) even if the prepared statement is
closed after each execution. The value of 0 disables the cache.
N.B. Each connection has its own statement cache.
* **preparedStatementCacheSizeMiB** = int
Determine the maximum size (in mebibytes) of the prepared queries cache
(see `preparedStatementCacheQueries`).
The default is 5, meaning if you happen to cache more than 5 MiB of queries
the least recently used ones will be discarded.
The main aim of this setting is to prevent `OutOfMemoryError`.
The value of 0 disables the cache.
* **preferQueryMode** = String
Specifies which mode is used to execute queries against the database: `simple` means the simple query protocol ('Q' execute; no parse, no bind, text mode only),
`extended` means always use bind/execute messages, `extendedForPrepared` means use the extended protocol for prepared statements only, and
`extendedCacheEverything` means use the extended protocol and try to cache every statement
(including `Statement.execute(String sql)`) in the query cache.
Possible values: extended | extendedForPrepared | extendedCacheEverything | simple.
The default is `extended`.
* **defaultRowFetchSize** = int
Determines the number of rows fetched in a `ResultSet`
per round trip to the database. Limiting the number of rows fetched with
each trip avoids unnecessary memory consumption
and, as a consequence, `OutOfMemoryError`.
The default is zero, meaning the `ResultSet` will fetch all rows at once.
Negative numbers are not allowed.
* **loginTimeout** = int
Specify how long to wait for establishment of a database connection. The
timeout is specified in seconds.
* **connectTimeout** = int
The timeout value used for socket connect operations. If connecting to the server
takes longer than this value, the connection is broken.
The timeout is specified in seconds and a value of zero means that it is disabled.
* **socketTimeout** = int
The timeout value used for socket read operations. If reading from the
server takes longer than this value, the connection is closed. This can
be used as both a brute force global query timeout and a method of
detecting network problems. The timeout is specified in seconds and a
value of zero means that it is disabled.
* **cancelSignalTimeout** = int
The cancel command is sent out of band over its own connection, so the cancel message can itself get
stuck. This property controls the "connect timeout" and "socket timeout" used for cancel commands.
The timeout is specified in seconds. The default value is 10 seconds.
* **tcpKeepAlive** = boolean
Enable or disable TCP keep-alive probe. The default is `false`.
* **unknownLength** = int
Certain postgresql types such as `TEXT` do not have a well defined length.
When returning meta-data about these types through functions like
`ResultSetMetaData.getColumnDisplaySize` and `ResultSetMetaData.getPrecision`
we must provide a value and various client tools have different ideas
about what they would like to see. This parameter specifies the length
to return for types of unknown length.
* **stringtype** = String
Specify the type to use when binding `PreparedStatement` parameters set
via `setString()`. If `stringtype` is set to `VARCHAR` (the default), such
parameters will be sent to the server as varchar parameters. If `stringtype`
is set to `unspecified`, parameters will be sent to the server as untyped
values, and the server will attempt to infer an appropriate type. This
is useful if you have an existing application that uses `setString()` to
set parameters that are actually some other type, such as integers, and
you are unable to change the application to use an appropriate method
such as `setInt()`.
* **kerberosServerName** = String
The Kerberos service name to use when authenticating with GSSAPI. This
is equivalent to libpq's PGKRBSRVNAME environment variable and defaults
to "postgres".
* **jaasApplicationName** = String
Specifies the name of the JAAS system or application login configuration.
* **jaasLogin** = boolean
Specifies whether to perform a JAAS login before authenticating with GSSAPI.
If set to `true` (the default), the driver will attempt to obtain GSS credentials
using the configured JAAS login module(s) (e.g. `Krb5LoginModule`) before
authenticating. To skip the JAAS login, for example if the native GSS
implementation is being used to obtain credentials, set this to `false`.
* **ApplicationName** = String
Specifies the name of the application that is using the connection.
This allows a database administrator to see what applications are
connected to the server and what resources they are using through views like pg_stat_activity.
* **gsslib** = String
Force either SSPI (Windows transparent single-sign-on) or GSSAPI (Kerberos, via JSSE)
to be used when the server requests Kerberos or SSPI authentication.
Permissible values are auto (default, see below), sspi (force SSPI) or gssapi (force GSSAPI-JSSE).
If this parameter is auto, SSPI is attempted if the server requests SSPI authentication,
the JDBC client is running on Windows, and the Waffle libraries required
for SSPI are on the CLASSPATH. Otherwise Kerberos/GSSAPI via JSSE is used.
Note that this behaviour does not exactly match that of libpq, which uses
Windows' SSPI libraries for Kerberos (GSSAPI) requests by default when on Windows.
gssapi mode forces JSSE's GSSAPI to be used even if SSPI is available, matching the pre-9.4 behaviour.
On non-Windows platforms or where SSPI is unavailable, forcing sspi mode will fail with a PSQLException.
Since: 9.4
* **sspiServiceClass** = String
Specifies the name of the Windows SSPI service class that forms the service
class part of the SPN. The default, POSTGRES, is almost always correct.
See: SSPI authentication (Pg docs), Service Principal Names (MSDN), DsMakeSpn (MSDN), Configuring SSPI (Pg wiki).
This parameter is ignored on non-Windows platforms.
* **useSpnego** = boolean
Use SPNEGO in SSPI authentication requests
* **sendBufferSize** = int
Sets SO_SNDBUF on the connection stream
* **receiveBufferSize** = int
Sets SO_RCVBUF on the connection stream
* **readOnly** = boolean
Put the connection in read-only mode
* **disableColumnSanitiser** = boolean
Setting this to true disables the column name sanitiser.
The sanitiser folds column names in the result set to lower case.
The default is to sanitise the columns (off).
* **assumeMinServerVersion** = String
Assume that the server is at least the given version,
thus enabling some optimizations at connection time instead of trying to be version-blind.
* **currentSchema** = String
Specify the schema to be set in the search-path.
This schema will be used to resolve unqualified object names used in statements over this connection.
* **targetServerType** = String
Allows opening connections only to servers with the required state;
the allowed values are any, master, slave, secondary, preferSlave and preferSecondary.
The master/slave distinction is currently made by observing whether the server allows writes.
The value preferSecondary tries to connect to a secondary if any are available,
otherwise it falls back to connecting to the master.
* **hostRecheckSeconds** = int
Controls how long in seconds the knowledge about a host state
is cached in JVM wide global cache. The default value is 10 seconds.
* **loadBalanceHosts** = boolean
In default mode (disabled) hosts are connected in the given order.
If enabled hosts are chosen randomly from the set of suitable candidates.
* **socketFactory** = String
The provided value is a class name to use as the `SocketFactory` when establishing a socket connection.
This may be used to create unix sockets instead of normal sockets. The class name specified by `socketFactory`
must extend `javax.net.SocketFactory` and be available to the driver's classloader.
This class must have a zero argument constructor or a single argument constructor taking a String argument.
This argument may optionally be supplied by `socketFactoryArg`.
* **socketFactoryArg** (deprecated) = String
This value is an optional argument to the constructor of the socket factory
class provided above.
* **reWriteBatchedInserts** = boolean
This will change batch inserts from `insert into foo (col1, col2, col3) values (1,2,3)` into
`insert into foo (col1, col2, col3) values (1,2,3), (4,5,6)`, which provides a 2-3x performance improvement.
* **replication** = String
Connection parameter passed in the startup message. This parameter accepts two values: "true"
and `database`. Passing `true` tells the backend to go into walsender mode, wherein a small set
of replication commands can be issued instead of SQL statements. Only the simple query protocol
can be used in walsender mode. Passing "database" as the value instructs walsender to connect
to the database specified in the dbname parameter, which will allow the connection to be used
for logical replication from that database. This parameter should be used together with
`assumeMinServerVersion` &gt;= 9.4 (backend &gt;= 9.4).
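Any of the parameters above can be combined in a single `Properties` object and passed to `DriverManager.getConnection(url, props)`. A sketch (the parameter values here are illustrative only):

```java
import java.util.Properties;

public class ConnProps {
    // Assemble illustrative connection properties; any parameter from the
    // list above can be set the same way (values are examples, not defaults).
    static Properties example() {
        Properties props = new Properties();
        props.setProperty("user", "fred");
        props.setProperty("password", "secret");
        props.setProperty("ssl", "true");
        props.setProperty("prepareThreshold", "3");       // server-side prepare after 3 executions
        props.setProperty("defaultRowFetchSize", "1000"); // rows fetched per round trip
        props.setProperty("socketTimeout", "30");         // seconds
        return props;
    }
}
```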
<a name="connection-failover"></a>
## Connection Fail-over
To support simple connection fail-over it is possible to define multiple endpoints
(host and port pairs) in the connection url separated by commas.
The driver will try to connect to each of them once, in order, until the connection succeeds.
If none succeed, a normal connection exception is thrown.
The syntax for the connection url is:
`jdbc:postgresql://host1:port1,host2:port2/database`
The simple connection fail-over is useful when running against a high availability
postgres installation that has identical data on each node.
For example streaming replication postgres or postgres-xc cluster.
For example, an application can create two connection pools.
One data source is for writes, another for reads. The write pool limits connections to the master node:
`jdbc:postgresql://node1,node2,node3/accounting?targetServerType=master`.
And the read pool balances connections between slave nodes, but allows connections also to the master if no slaves are available:
`jdbc:postgresql://node1,node2,node3/accounting?targetServerType=preferSlave&loadBalanceHosts=true`
If a slave fails, all slaves in the list will be tried first. If the case that there are no available slaves
the master will be tried. If all of the servers are marked as "can't connect" in the cache then an attempt
will be made to connect to all of the hosts in the URL in order.
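The multi-host URLs above follow a simple pattern. As a sketch, a helper that assembles them (the `buildUrl` method is illustrative, not part of the driver API):

```java
public class FailoverUrl {
    // Assemble a multi-host JDBC URL of the form
    // jdbc:postgresql://host1,host2,.../database?targetServerType=...
    static String buildUrl(String[] endpoints, String database, String targetServerType) {
        return "jdbc:postgresql://" + String.join(",", endpoints)
                + "/" + database
                + "?targetServerType=" + targetServerType;
    }

    public static void main(String[] args) {
        String[] nodes = {"node1", "node2", "node3"};
        // Write pool targets the master only; read pool prefers slaves and balances load.
        System.out.println(buildUrl(nodes, "accounting", "master"));
        System.out.println(buildUrl(nodes, "accounting", "preferSlave") + "&loadBalanceHosts=true");
    }
}
```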

---
layout: default_docs
title: Chapter 11. Connection Pools and Data Sources
header: Chapter 11. Connection Pools and Data Sources
resource: media
previoustitle: Chapter 10. Using the Driver in a Multithreaded or a Servlet Environment
previous: thread.html
nexttitle: Application Servers ConnectionPoolDataSource
next: ds-cpds.html
---
**Table of Contents**
* [Overview](datasource.html#ds-intro)
* [Application Servers: `ConnectionPoolDataSource`](ds-cpds.html)
* [Applications: `DataSource`](ds-ds.html)
* [Tomcat setup](tomcat.html)
* [Data Sources and JNDI](jndi.html)
JDBC 2 introduced standard connection pooling features in an add-on API known as
the JDBC 2.0 Optional Package (also known as the JDBC 2.0 Standard Extension).
These features have since been included in the core JDBC 3 API.
<a name="ds-intro"></a>
# Overview
The JDBC API provides a client and a server interface for connection pooling.
The client interface is `javax.sql.DataSource`, which is what application code
will typically use to acquire a pooled database connection. The server interface
is `javax.sql.ConnectionPoolDataSource`, which is how most application servers
will interface with the PostgreSQL™ JDBC driver.
In an application server environment, the application server configuration will
typically refer to the PostgreSQL™ `ConnectionPoolDataSource` implementation,
while the application component code will typically acquire a `DataSource`
implementation provided by the application server (not by PostgreSQL™).
For an environment without an application server, PostgreSQL™ provides two
implementations of `DataSource` which an application can use directly. One
implementation performs connection pooling, while the other simply provides
access to database connections through the `DataSource` interface without any
pooling. Again, these implementations should not be used in an application server
environment unless the application server does not support the `ConnectionPoolDataSource`
interface.

---
layout: default_docs
title: Creating and Modifying Database Objects
header: Chapter 5. Issuing a Query and Processing the Result
resource: media
previoustitle: Performing Updates
previous: update.html
nexttitle: Using Java 8 Date and Time classes
next: java8-date-time.html
---
To create, modify or drop a database object like a table or view you use the
`execute()` method. This method is similar to the method `executeQuery()`, but
it doesn't return a result. [Example 5.4, “Dropping a Table in JDBC”](ddl.html#drop-table-example)
illustrates the usage.
<a name="drop-table-example"></a>
**Example 5.4. Dropping a Table in JDBC**
This example will drop a table.
```java
Statement st = conn.createStatement();
st.execute("DROP TABLE mytable");
st.close();
```

---
layout: default_docs
title: Application Servers ConnectionPoolDataSource
header: Chapter 11. Connection Pools and Data Sources
resource: media
previoustitle: Chapter 11. Connection Pools and Data Sources
previous: datasource.html
nexttitle: Applications DataSource
next: ds-ds.html
---
PostgreSQL™ includes one implementation of `ConnectionPoolDataSource` named
`org.postgresql.ds.PGConnectionPoolDataSource`.
JDBC requires that a `ConnectionPoolDataSource` be configured via JavaBean
properties, shown in [Table 11.1, “`ConnectionPoolDataSource` Configuration Properties”](ds-cpds.html#ds-cpds-props),
so there are get and set methods for each of these properties.
<a name="ds-cpds-props"></a>
**Table 11.1. `ConnectionPoolDataSource` Configuration Properties**
<table summary="ConnectionPoolDataSource Configuration Properties" class="CALSTABLE" border="1">
<tr>
<th>Property</th>
<th>Type</th>
<th>Description</th>
</tr>
<tbody>
<tr>
<td>serverName</td>
<td>STRING</td>
<td>PostgreSQL™ database server
host name</td>
</tr>
<tr>
<td>databaseName</td>
<td>STRING</td>
<td>PostgreSQL™ database name</td>
</tr>
<tr>
<td>portNumber</td>
<td>INT</td>
<td> TCP port which the PostgreSQL™
database server is listening on (or 0 to use the default port) </td>
</tr>
<tr>
<td>user</td>
<td>STRING</td>
<td>User used to make database connections</td>
</tr>
<tr>
<td>password</td>
<td>STRING</td>
<td>Password used to make database connections</td>
</tr>
<tr>
<td>ssl</td>
<td>BOOLEAN</td>
<td> If `true`, use SSL encrypted
connections (default `false`) </td>
</tr>
<tr>
<td>sslfactory</td>
<td>STRING</td>
<td> Custom `javax.net.ssl.SSLSocketFactory`
class name (see the section called [“Custom
SSLSocketFactory”](ssl-factory.html)) </td>
</tr>
<tr>
<td>defaultAutoCommit</td>
<td>BOOLEAN</td>
<td> Whether connections should have autocommit enabled or
disabled when they are supplied to the caller. The default is `false`, to disable autocommit. </td>
</tr>
</tbody>
</table>
Many application servers use a properties-style syntax to configure these
properties, so it would not be unusual to enter properties as a block of text.
If the application server provides a single area to enter all the properties,
they might be listed like this:
`serverName=localhost`
`databaseName=test`
`user=testuser`
`password=testpassword`
Or, if semicolons are used as separators instead of newlines, it could look like
this:
`serverName=localhost;databaseName=test;user=testuser;password=testpassword`
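The semicolon-separated form can be split back into individual JavaBean property assignments. An illustrative sketch (the `parse` helper is hypothetical, not driver API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DataSourceProps {
    // Split a semicolon-separated property block, as shown above,
    // into individual name/value pairs.
    static Map<String, String> parse(String block) {
        Map<String, String> props = new LinkedHashMap<>();
        for (String entry : block.split(";")) {
            int eq = entry.indexOf('=');
            props.put(entry.substring(0, eq).trim(), entry.substring(eq + 1).trim());
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(parse("serverName=localhost;databaseName=test;user=testuser;password=testpassword"));
    }
}
```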

---
layout: default_docs
title: Applications DataSource
header: Chapter 11. Connection Pools and Data Sources
resource: media
previoustitle: Application Servers ConnectionPoolDataSource
previous: ds-cpds.html
nexttitle: Tomcat setup
next: tomcat.html
---
PostgreSQL™ includes two implementations of `DataSource`, as shown in [Table 11.2, “`DataSource` Implementations”](ds-ds.html#ds-ds-imp):
one that does pooling and one that does not. The pooling implementation
does not actually close connections when the client calls the `close` method,
but instead returns the connections to a pool of available connections for other
clients to use. This avoids any overhead of repeatedly opening and closing
connections, and allows a large number of clients to share a small number of
database connections.
The pooling data-source implementation provided here is not the most feature-rich
in the world. Among other things, connections are never closed until the pool
itself is closed; there is no way to shrink the pool. As well, connections
requested for users other than the default configured user are not pooled. Its
error handling sometimes cannot remove a broken connection from the pool. In
general it is not recommended to use the PostgreSQL™ provided connection pool.
Check your application server or check out the excellent [jakarta commons DBCP](http://jakarta.apache.org/commons/dbcp/)
project.
<a name="ds-ds-imp"></a>
**Table 11.2. `DataSource` Implementations**
<table summary="DataSource Implementations" class="CALSTABLE" border="1">
<tr>
<th>Pooling</th>
<th>Implementation Class</th>
</tr>
<tbody>
<tr>
<td>No</td>
<td>`org.postgresql.ds.PGSimpleDataSource`</td>
</tr>
<tr>
<td>Yes</td>
<td>`org.postgresql.ds.PGPoolingDataSource`</td>
</tr>
</tbody>
</table>
Both implementations use the same configuration scheme. JDBC requires that a
`DataSource` be configured via JavaBean properties, shown in [Table 11.3, “`DataSource` Configuration Properties”](ds-ds.html#ds-ds-props),
so there are get and set methods for each of these properties.
<a name="ds-ds-props"></a>
**Table 11.3. `DataSource` Configuration Properties**
<table summary="DataSource Configuration Properties" class="CALSTABLE" border="1">
<tr>
<th>Property</th>
<th>Type</th>
<th>Description</th>
</tr>
<tbody>
<tr>
<td>serverName</td>
<td>STRING</td>
<td>PostgreSQL™ database server host name</td>
</tr>
<tr>
<td>databaseName</td>
<td>STRING</td>
<td>PostgreSQL™ database name</td>
</tr>
<tr>
<td>portNumber</td>
<td>INT</td>
<td>TCP port which the PostgreSQL™
database server is listening on (or 0 to use the default port)</td>
</tr>
<tr>
<td>user</td>
<td>STRING</td>
<td>User used to make database connections</td>
</tr>
<tr>
<td>password</td>
<td>STRING</td>
<td>Password used to make database connections</td>
</tr>
<tr>
<td>ssl</td>
<td>BOOLEAN</td>
<td> If true, use SSL encrypted
connections (default false) </td>
</tr>
<tr>
<td>sslfactory</td>
<td>STRING</td>
<td> Custom javax.net.ssl.SSLSocketFactory
class name (see the section called [“Custom
SSLSocketFactory”](ssl-factory.html))</td>
</tr>
</tbody>
</table>
The pooling implementation requires some additional configuration properties,
which are shown in [Table 11.4, “Additional Pooling `DataSource` Configuration Properties”](ds-ds.html#ds-ds-xprops).
<a name="ds-ds-xprops"></a>
**Table 11.4. Additional Pooling `DataSource` Configuration Properties**
<table summary="Additional Pooling DataSource Configuration Properties" class="CALSTABLE" border="1">
<tr>
<th>Property</th>
<th>Type</th>
<th>Description</th>
</tr>
<tbody>
<tr>
<td>dataSourceName</td>
<td>STRING</td>
<td>Every pooling DataSource must
have a unique name.</td>
</tr>
<tr>
<td>initialConnections</td>
<td>INT</td>
<td>The number of database connections to be created when the
pool is initialized.</td>
</tr>
<tr>
<td>maxConnections</td>
<td>INT</td>
<td>The maximum number of open database connections to allow.
When more connections are requested, the caller will hang until a
connection is returned to the pool.</td>
</tr>
</tbody>
</table>
[Example 11.1, “`DataSource` Code Example”](ds-ds.html#ds-example) shows an example
of typical application code using a pooling `DataSource`.
<a name="ds-example"></a>
**Example 11.1. `DataSource` Code Example**
Code to initialize a pooling `DataSource` might look like this:
```java
PGPoolingDataSource source = new PGPoolingDataSource();
source.setDataSourceName("A Data Source");
source.setServerName("localhost");
source.setDatabaseName("test");
source.setUser("testuser");
source.setPassword("testpassword");
source.setMaxConnections(10);
```
Then code to use a connection from the pool might look like this. Note that it
is critical that the connections are eventually closed. Otherwise the pool will
“leak” connections and will eventually lock all the clients out.
```java
Connection conn = null;
try
{
conn = source.getConnection();
// use connection
}
catch (SQLException e)
{
// log error
}
finally
{
    if (conn != null)
{
try { conn.close(); } catch (SQLException e) {}
}
}
```

---
layout: default_docs
title: Escaped scalar functions
header: Chapter 8. JDBC escapes
resource: media
previoustitle: Date-time escapes
previous: escapes-datetime.html
nexttitle: Chapter 9. PostgreSQL™ Extensions to the JDBC API
next: ext.html
---
The JDBC specification defines functions with an escape call syntax: `{fn function_name(arguments)}`.
The following tables show which functions are supported by the PostgreSQL™ driver.
The driver supports the nesting and the mixing of escaped functions and escaped
values. Appendix C of the JDBC specification describes the functions.
Some functions in the following tables are translated but not reported as supported
because they duplicate arguments or change their order. While this
is harmless for literal values or columns, it will cause problems when using
prepared statements. For example, "`{fn right(?,?)}`" will be translated to "`substring(? from (length(?)+1-?))`".
As you can see, the translated SQL requires more parameters than before the
translation, but the driver will not automatically handle this.
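The parameter-count mismatch can be made concrete by counting placeholders in the escaped form and in its translation. A small sketch (the naive counter ignores quoting, which is sufficient here):

```java
public class EscapePlaceholders {
    // Count '?' placeholders in a SQL string (naive: ignores string
    // literals and comments, sufficient for this illustration).
    static int countPlaceholders(String sql) {
        int n = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?') n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String escaped    = "{fn right(?,?)}";
        String translated = "substring(? from (length(?)+1-?))";
        // The escape takes 2 parameters but its translation needs 3,
        // which is why right() is not reported as supported.
        System.out.println(countPlaceholders(escaped));    // 2
        System.out.println(countPlaceholders(translated)); // 3
    }
}
```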
<a name="escape-numeric-functions-table"></a>
**Table 8.1. Supported escaped numeric functions**
<table summary="Supported escaped numeric functions" class="CALSTABLE" border="1">
<tr>
<th>function</th>
<th>reported as supported</th>
<th>translation</th>
<th>comments</th>
</tr>
<tbody>
<tr>
<td>abs(arg1)</td>
<td>yes</td>
<td>abs(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>acos(arg1)</td>
<td>yes</td>
<td>acos(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>asin(arg1)</td>
<td>yes</td>
<td>asin(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>atan(arg1)</td>
<td>yes</td>
<td>atan(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>atan2(arg1,arg2)</td>
<td>yes</td>
<td>atan2(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>ceiling(arg1)</td>
<td>yes</td>
<td>ceil(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>cos(arg1)</td>
<td>yes</td>
<td>cos(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>cot(arg1)</td>
<td>yes</td>
<td>cot(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>degrees(arg1)</td>
<td>yes</td>
<td>degrees(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>exp(arg1)</td>
<td>yes</td>
<td>exp(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>floor(arg1)</td>
<td>yes</td>
<td>floor(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>log(arg1)</td>
<td>yes</td>
<td>ln(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>log10(arg1)</td>
<td>yes</td>
<td>log(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>mod(arg1,arg2)</td>
<td>yes</td>
<td>mod(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>pi(arg1)</td>
<td>yes</td>
<td>pi(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>power(arg1,arg2)</td>
<td>yes</td>
<td>pow(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>radians(arg1)</td>
<td>yes</td>
<td>radians(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>rand()</td>
<td>yes</td>
<td>random()</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>rand(arg1)</td>
<td>yes</td>
<td>setseed(arg1)*0+random()</td>
<td>The seed is initialized with the given argument and a new random value is returned.</td>
</tr>
<tr>
<td>round(arg1,arg2)</td>
<td>yes</td>
<td>round(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>sign(arg1)</td>
<td>yes</td>
<td>sign(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>sin(arg1)</td>
<td>yes</td>
<td>sin(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>sqrt(arg1)</td>
<td>yes</td>
<td>sqrt(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>tan(arg1)</td>
<td>yes</td>
<td>tan(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>truncate(arg1,arg2)</td>
<td>yes</td>
<td>trunc(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
</tbody>
</table>
<a name="escape-string-functions-table"></a>
**Table 8.2. Supported escaped string functions**
<table summary="Supported escaped string functions" class="CALSTABLE" border="1">
<tr>
<th>function</th>
<th>reported as supported</th>
<th>translation</th>
<th>comments</th>
</tr>
<tbody>
<tr>
<td>ascii(arg1)</td>
<td>yes</td>
<td>ascii(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>char(arg1)</td>
<td>yes</td>
<td>chr(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>concat(arg1,arg2...)</td>
<td>yes</td>
<td>(arg1||arg2...)</td>
<td>The JDBC specification
only requires the two-argument version, but supporting more arguments
was easy.</td>
</tr>
<tr>
<td>insert(arg1,arg2,arg3,arg4)</td>
<td>no</td>
<td>overlay(arg1 placing arg4 from arg2 for arg3)</td>
<td>This function is not reported as supported since it changes
the order of the arguments, which can be a problem (for prepared
statements, for example).</td>
</tr>
<tr>
<td>lcase(arg1)</td>
<td>yes</td>
<td>lower(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>left(arg1,arg2)</td>
<td>yes</td>
<td>substring(arg1 for arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>length(arg1)</td>
<td>yes</td>
<td>length(trim(trailing from arg1))</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>locate(arg1,arg2)</td>
<td>no</td>
<td>position(arg1 in arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>locate(arg1,arg2,arg3)</td>
<td>no</td>
<td>(arg2*sign(position(arg1 in substring(arg2 from
arg3)))+position(arg1 in substring(arg2 from arg3)))</td>
<td>Not reported as supported since the three-argument version
duplicates and changes the order of the arguments.</td>
</tr>
<tr>
<td>ltrim(arg1)</td>
<td>yes</td>
<td>trim(leading from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>repeat(arg1,arg2)</td>
<td>yes</td>
<td>repeat(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>replace(arg1,arg2,arg3)</td>
<td>yes</td>
<td>replace(arg1,arg2,arg3)</td>
<td>Only reported as supported by 7.3 and above servers.</td>
</tr>
<tr>
<td>right(arg1,arg2)</td>
<td>no</td>
<td>substring(arg1 from (length(arg1)+1-arg2))</td>
<td>Not reported as supported since arg2 is duplicated.</td>
</tr>
<tr>
<td>rtrim(arg1)</td>
<td>yes</td>
<td>trim(trailing from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>space(arg1)</td>
<td>yes</td>
<td>repeat(' ',arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>substring(arg1,arg2)</td>
<td>yes</td>
<td>substr(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>substring(arg1,arg2,arg3)</td>
<td>yes</td>
<td>substr(arg1,arg2,arg3)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>ucase(arg1)</td>
<td>yes</td>
<td>upper(arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>soundex(arg1)</td>
<td>no</td>
<td>soundex(arg1)</td>
<td>Not reported as supported since it requires the fuzzystrmatch
contrib module.</td>
</tr>
<tr>
<td>difference(arg1,arg2)</td>
<td>no</td>
<td>difference(arg1,arg2)</td>
<td>Not reported as supported since it requires the fuzzystrmatch
contrib module.</td>
</tr>
</tbody>
</table>
<a name="escape-datetime-functions-table"></a>
**Table 8.3. Supported escaped date/time functions**
<table summary="Supported escaped date/time functions" class="CALSTABLE" border="1">
<tr>
<th>function</th>
<th>reported as supported</th>
<th>translation</th>
<th>comments</th>
</tr>
<tbody>
<tr>
<td>curdate()</td>
<td>yes</td>
<td>current_date</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>curtime()</td>
<td>yes</td>
<td>current_time</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>dayname(arg1)</td>
<td>yes</td>
<td>to_char(arg1,'Day')</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>dayofmonth(arg1)</td>
<td>yes</td>
<td>extract(day from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>dayofweek(arg1)</td>
<td>yes</td>
<td>extract(dow from arg1)+1</td>
<td>We must add 1 to be in the expected 1-7 range.</td>
</tr>
<tr>
<td>dayofyear(arg1)</td>
<td>yes</td>
<td>extract(doy from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>hour(arg1)</td>
<td>yes</td>
<td>extract(hour from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>minute(arg1)</td>
<td>yes</td>
<td>extract(minute from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>month(arg1)</td>
<td>yes</td>
<td>extract(month from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>monthname(arg1)</td>
<td>yes</td>
<td>to_char(arg1,'Month')</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>now()</td>
<td>yes</td>
<td>now()</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>quarter(arg1)</td>
<td>yes</td>
<td>extract(quarter from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>second(arg1)</td>
<td>yes</td>
<td>extract(second from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>week(arg1)</td>
<td>yes</td>
<td>extract(week from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>year(arg1)</td>
<td>yes</td>
<td>extract(year from arg1)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>timestampadd(argIntervalType,argCount,argTimeStamp)</td>
<td>yes</td>
<td>('(interval according to argIntervalType and
argCount)'+argTimeStamp)</td>
<td>An argIntervalType value of SQL_TSI_FRAC_SECOND
is not implemented since the backend does not support it.</td>
</tr>
<tr>
<td>timestampdiff(argIntervalType,argTimeStamp1,argTimeStamp2)</td>
<td>no</td>
<td>extract((interval according to argIntervalType) from
argTimeStamp2-argTimeStamp1 )</td>
<td>Only an argIntervalType value of SQL_TSI_SECOND, SQL_TSI_MINUTE, SQL_TSI_HOUR
or SQL_TSI_DAY is supported.</td>
</tr>
</tbody>
</table>
<a name="escape-misc-functions-table"></a>
**Table 8.4. Supported escaped misc functions**
<table summary="Supported escaped misc functions" class="CALSTABLE" border="1">
<tr>
<th>function</th>
<th>reported as supported</th>
<th>translation</th>
<th>comments</th>
</tr>
<tbody>
<tr>
<td>database()</td>
<td>yes</td>
<td>current_database()</td>
<td>Only reported as supported by 7.3 and above servers.</td>
</tr>
<tr>
<td>ifnull(arg1,arg2)</td>
<td>yes</td>
<td>coalesce(arg1,arg2)</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>user()</td>
<td>yes</td>
<td>user</td>
<td>&nbsp;</td>
</tr>
</tbody>
</table>

---
layout: default_docs
title: Date-time escapes
header: Chapter 8. JDBC escapes
resource: media
previoustitle: Escape for outer joins
previous: outer-joins-escape.html
nexttitle: Escaped scalar functions
next: escaped-functions.html
---
The JDBC specification defines escapes for specifying date, time and timestamp
values which are supported by the driver.
> date
>> `{d 'yyyy-mm-dd'}` which is translated to `DATE 'yyyy-mm-dd'`
> time
>> `{t 'hh:mm:ss'}` which is translated to `TIME 'hh:mm:ss'`
> timestamp
>> `{ts 'yyyy-mm-dd hh:mm:ss.f...'}` which is translated to `TIMESTAMP 'yyyy-mm-dd hh:mm:ss.f'`<br /><br />
>> The fractional seconds (.f...) portion of the TIMESTAMP can be omitted.
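As an illustration of the rewriting described above, a small sketch that mirrors the documented translations (the `translate` helper only imitates what the driver does internally; it is not driver API):

```java
public class DateTimeEscapes {
    // Mirror the documented date/time escape translations:
    //   {d '...'}  -> DATE '...'
    //   {t '...'}  -> TIME '...'
    //   {ts '...'} -> TIMESTAMP '...'
    static String translate(String escape) {
        if (escape.startsWith("{ts ")) return "TIMESTAMP " + escape.substring(4, escape.length() - 1);
        if (escape.startsWith("{d "))  return "DATE " + escape.substring(3, escape.length() - 1);
        if (escape.startsWith("{t "))  return "TIME " + escape.substring(3, escape.length() - 1);
        return escape; // not a date/time escape
    }

    public static void main(String[] args) {
        System.out.println(translate("{d '2005-01-24'}"));           // DATE '2005-01-24'
        System.out.println(translate("{t '12:00:00'}"));             // TIME '12:00:00'
        System.out.println(translate("{ts '2005-01-24 12:00:00'}")); // TIMESTAMP '2005-01-24 12:00:00'
    }
}
```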

---
layout: default_docs
title: Chapter 8. JDBC escapes
header: Chapter 8. JDBC escapes
resource: media
previoustitle: Chapter 7. Storing Binary Data
previous: binary-data.html
nexttitle: Escape for outer joins
next: outer-joins-escape.html
---
**Table of Contents**
* [Escape for like escape character](escapes.html#like-escape)
* [Escape for outer joins](outer-joins-escape.html)
* [Date-time escapes](escapes-datetime.html)
* [Escaped scalar functions](escaped-functions.html)
The JDBC specification (like the ODBC specification) acknowledges the fact that
some vendor specific SQL may be required for certain RDBMS features. To aid
developers in writing portable JDBC applications across multiple database products,
a special escape syntax is used to specify the generic commands the developer
wants to be run. The JDBC driver translates these escape sequences into native
syntax for its specific database. For more information consult the
[Java DB Technical Documentation](http://docs.oracle.com/javadb/10.10.1.2/ref/rrefjdbc1020262.html).
The parsing of the SQL statements for these escapes can be disabled using
`Statement.setEscapeProcessing(false)`.
`Connection.nativeSQL(String sql)` provides another way to have escapes processed:
it translates the given SQL into SQL suitable for the PostgreSQL™ backend.
<a name="escape-use-example"></a>
**Example 8.1. Using JDBC escapes**
To use the JDBC escapes, you simply write your SQL replacing date/time literal
values, outer joins and functions with the JDBC escape syntax. For example:
```java
ResultSet rs = st.executeQuery("SELECT {fn week({d '2005-01-24'})}");
```
is the portable version for
```java
ResultSet rs = st.executeQuery("SELECT extract(week from DATE '2005-01-24')");
```
<a name="like-escape"></a>
# Escape for like escape character
You can specify which escape character to use in string comparisons (with `LIKE`)
to protect wildcard characters ('%' and '_') by adding the following escape:
`{escape 'escape-character'}`. The driver supports this only at the end of the
comparison expression.
For example, you can compare string values using '|' as the escape character to protect '_':
```java
rs = stmt.executeQuery("select str2 from comparisontest where str1 like '|_abcd' {escape '|'} ");
```

---
layout: default_docs
title: Chapter 9. PostgreSQL™ Extensions to the JDBC API
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Escaped scalar functions
previous: escaped-functions.html
nexttitle: Geometric Data Types
next: geometric.html
---
**Table of Contents**
* [Accessing the Extensions](ext.html#extensions)
* [Geometric Data Types](geometric.html)
* [Large Objects](largeobjects.html)
* [Listen / Notify](listennotify.html)
* [Server Prepared Statements](server-prepare.html)
* [Physical and Logical replication API](replication.html)
* [Arrays](arrays.html)
PostgreSQL™ is an extensible database system. You can add your own functions to
the server, which can then be called from queries, or even add your own data types.
As these are facilities unique to PostgreSQL™, we support them from Java, with a
set of extension APIs. Some features within the core of the standard driver
actually use these extensions to implement Large Objects, etc.
<a name="extensions"></a>
# Accessing the Extensions
To access some of the extensions, you need to use some extra methods in the
`org.postgresql.PGConnection` class. In this case, you would need to cast the
return value of `Driver.getConnection()`. For example:
```java
Connection db = Driver.getConnection(url, username, password);
// ...
// later on
Fastpath fp = db.unwrap(org.postgresql.PGConnection.class).getFastpathAPI();
```

---
layout: default_docs
title: Geometric Data Types
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Chapter 9. PostgreSQL™ Extensions to the JDBC API
previous: ext.html
nexttitle: Large Objects
next: largeobjects.html
---
PostgreSQL™ has a set of data types that can store geometric features into a
table. These include single points, lines, and polygons. We support these types
in Java with the `org.postgresql.geometric` package. Please consult the Javadoc
mentioned in [Chapter 13, *Further Reading*](reading.html) for details of
available classes and features.
<a name="geometric-circle-example"></a>
**Example 9.1. Using the CIRCLE datatype in JDBC**
```java
import java.sql.*;
import org.postgresql.geometric.PGpoint;
import org.postgresql.geometric.PGcircle;
public class GeometricTest {
public static void main(String args[]) throws Exception {
String url = "jdbc:postgresql://localhost:5432/test";
try (Connection conn = DriverManager.getConnection(url, "test", "")) {
try (Statement stmt = conn.createStatement()) {
stmt.execute("CREATE TEMP TABLE geomtest(mycirc circle)");
}
insertCircle(conn);
retrieveCircle(conn);
}
}
private static void insertCircle(Connection conn) throws SQLException {
PGpoint center = new PGpoint(1, 2.5);
double radius = 4;
PGcircle circle = new PGcircle(center, radius);
try (PreparedStatement ps = conn.prepareStatement("INSERT INTO geomtest(mycirc) VALUES (?)")) {
ps.setObject(1, circle);
ps.executeUpdate();
}
}
private static void retrieveCircle(Connection conn) throws SQLException {
try (Statement stmt = conn.createStatement()) {
try (ResultSet rs = stmt.executeQuery("SELECT mycirc, area(mycirc) FROM geomtest")) {
while (rs.next()) {
PGcircle circle = (PGcircle)rs.getObject(1);
double area = rs.getDouble(2);
System.out.println("Center (X, Y) = (" + circle.center.x + ", " + circle.center.y + ")");
System.out.println("Radius = " + circle.radius);
System.out.println("Area = " + area);
}
}
}
}
}
```

<!DOCTYPE html>
<html>
<head>
<title>The PostgreSQL JDBC Interface</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="description" content="The official documentation for the PostgreSQL JDBC Driver" />
<meta name="copyright" content="The PostgreSQL Global Development Group" />
<style type="text/css" media="screen" title="Normal Text">@import url("media/css/docs.css");</style>
<link rel="stylesheet" type="text/css" href="media/css/syntax.css">
<link rel="shortcut icon" href="media/favicon.ico" />
</head>
<body>
<div id="docHeader">
<div id="docHeaderLogo">
<a href="http://www.postgresql.org/" title="PostgreSQL"><img src="media/img/layout/hdr_left3a.png" alt="PostgreSQL" height="80" width="390" /></a>
</div>
</div>
<div id="docContainerWrap">
<div id="docContainer">
<div id="docContent">
<div class="BOOK">
<a name="POSTGRES" id="POSTGRES"></a>
<div class="TITLEPAGE">
<h1 class="TITLE">The PostgreSQL JDBC Interface</h1>
<hr />
</div>
<div class="TOC">
<h3 class="c2">Table of Contents</h3>
<dl>
<!-- <dt class="c1">Table of Contents</dt> -->
<dt>1. <a href="intro.html">Introduction</a></dt>
<dt>2. <a href="setup.html">Setting up the JDBC Driver</a></dt>
<dd>
<dl>
<dt><a href="setup.html#build">Getting the Driver</a></dt>
<dt><a href="classpath.html">Setting up the Class Path</a></dt>
<dt><a href="prepare.html">Preparing the Database Server for <acronym class="ACRONYM">JDBC</acronym></a></dt>
<dt><a href="your-database.html">Creating a Database</a></dt>
</dl>
</dd>
<dt>3. <a href="use.html">Initializing the Driver</a></dt>
<dd>
<dl>
<dt><a href="use.html#import">Importing <acronym class="ACRONYM">JDBC</acronym></a></dt>
<dt><a href="load.html">Loading the Driver</a></dt>
<dt><a href="connect.html">Connecting to the Database</a></dt>
<dd>
<dl>
<dt><a href="connect.html#connection-parameters">Connection Parameters</a></dt>
</dl>
</dd>
</dl>
</dd>
<dt>4. <a href="ssl.html">Using <acronym class="ACRONYM">SSL</acronym></a></dt>
<dd>
<dl>
<dt><a href="ssl.html#ssl-server">Configuring the Server</a></dt>
<dt><a href="ssl-client.html">Configuring the Client</a></dt>
<dd>
<dl>
<dt><a href="ssl-client.html#nonvalidating">Using SSL without Certificate Validation</a></dt>
</dl>
</dd>
<dt><a href="ssl-factory.html">Custom SSLSocketFactory</a></dt>
</dl>
</dd>
<dt>5. <a href="query.html">Issuing a Query and Processing the Result</a></dt>
<dd>
<dl>
<dt><a href="query.html#query-with-cursor">Getting results based on a cursor</a></dt>
<dt><a href="statement.html">Using the Statement or PreparedStatement Interface</a></dt>
<dt><a href="resultset.html">Using the ResultSet Interface</a></dt>
<dt><a href="update.html">Performing Updates</a></dt>
<dt><a href="ddl.html">Creating and Modifying Database Objects</a></dt>
<dt><a href="java8-date-time.html">Using Java 8 Date and Time classes</a></dt>
</dl>
</dd>
<dt>6. <a href="callproc.html">Calling Stored Functions</a></dt>
<dd>
<dl>
<dt><a href="callproc.html#callproc-resultset">Obtaining a ResultSet from a stored function</a></dt>
<dd>
<dl>
<dt><a href="callproc.html#callproc-resultset-setof">From a Function Returning SETOF type</a></dt>
<dt><a href="callproc.html#callproc-resultset-refcursor">From a Function Returning a <span class="type">refcursor</span></a></dt>
</dl>
</dd>
</dl>
</dd>
<dt>7. <a href="binary-data.html">Storing Binary Data</a></dt>
<dt>8. <a href="escapes.html"><acronym class="ACRONYM">JDBC</acronym> escapes</a></dt>
<dd>
<dl>
<dt><a href="escapes.html#like-escape">Escape for like escape character</a></dt>
<dt><a href="outer-joins-escape.html">Escape for outer joins</a></dt>
<dt><a href="escapes-datetime.html">Date-time escapes</a></dt>
<dt><a href="escaped-functions.html">Escaped scalar functions</a></dt>
</dl>
</dd>
<dt>9. <a href="ext.html">PostgreSQL™ Extensions to the <acronym class="ACRONYM">JDBC</acronym> <acronym class="ACRONYM">API</acronym></a></dt>
<dd>
<dl>
<dt><a href="ext.html#extensions">Accessing the Extensions</a></dt>
<dt><a href="geometric.html">Geometric Data Types</a></dt>
<dt><a href="largeobjects.html">Large Objects</a></dt>
<dt><a href="listennotify.html">Listen / Notify</a></dt>
<dt><a href="server-prepare.html">Server Prepared Statements</a></dt>
<dt><a href="replication.html">Physical and Logical replication API</a></dt>
<dt><a href="arrays.html">Arrays</a></dt>
</dl>
</dd>
<dt>10. <a href="thread.html">Using the Driver in a Multithreaded or a Servlet Environment</a></dt>
<dt>11. <a href="datasource.html">Connection Pools and Data Sources</a></dt>
<dd>
<dl>
<dt><a href="datasource.html#ds-intro">Overview</a></dt>
<dt><a href="ds-cpds.html">Application Servers: ConnectionPoolDataSource</a></dt>
<dt><a href="ds-ds.html">Applications: DataSource</a></dt>
<dt><a href="tomcat.html">Tomcat setup</a></dt>
<dt><a href="jndi.html">Data Sources and <acronym class="ACRONYM">JNDI</acronym></a></dt>
</dl>
</dd>
<dt>12. <a href="logging.html">Logging with java.util.logging</a></dt>
<dt>13. <a href="reading.html">Further Reading</a></dt>
</dl>
</div>
<div class="LOT">
<h3 class="c2">List of Tables</h3>
<dl class="LOT">
<!-- <dt class="c1">List of Tables</dt> -->
<dt>
8.1. <a href="escaped-functions.html#escape-numeric-functions-table">Supported escaped numeric functions</a>
</dt>
<dt>
8.2. <a href="escaped-functions.html#escape-string-functions-table">Supported escaped string functions</a>
</dt>
<dt>
8.3. <a href="escaped-functions.html#escape-datetime-functions-table">Supported escaped date/time functions</a>
</dt>
<dt>
8.4. <a href="escaped-functions.html#escape-misc-functions-table">Supported escaped misc functions</a>
</dt>
<dt>
11.1. <a href="ds-cpds.html#ds-cpds-props">ConnectionPoolDataSource Configuration Properties</a>
</dt>
<dt>
11.2. <a href="ds-ds.html#ds-ds-imp">DataSource Implementations</a>
</dt>
<dt>
11.3. <a href="ds-ds.html#ds-ds-props">DataSource Configuration Properties</a>
</dt>
<dt>
11.4. <a href="ds-ds.html#ds-ds-xprops">Additional Pooling DataSource Configuration Properties</a>
</dt>
</dl>
</div>
<div class="LOT">
<h3 class="c2">List of Examples</h3>
<dl class="LOT">
<!-- <dt class="c1">List of Examples</dt> -->
<dt>
5.1. <a href="query.html#query-example">Processing a Simple Query in <acronym class="ACRONYM">JDBC</acronym></a>
</dt>
<dt>
5.2. <a href="query.html#fetchsize-example">Setting fetch size to turn cursors on and off.</a>
</dt>
<dt>
5.3. <a href="update.html#delete-example">Deleting Rows in <acronym class="ACRONYM">JDBC</acronym></a>
</dt>
<dt>
5.4. <a href="ddl.html#drop-table-example">Dropping a Table in <acronym class="ACRONYM">JDBC</acronym></a>
</dt>
<dt>
6.1. <a href="callproc.html#call-function-example">Calling a built in stored function</a>
</dt>
<dt>
6.2. <a href="callproc.html#setof-resultset"> Getting SETOF type values from a function</a>
</dt>
<dt>
6.3. <a href="callproc.html#get-refcursor-from-function-call"> Getting <span class="type">refcursor</span> Value From a Function</a>
</dt>
<dt>
6.4. <a href="callproc.html#refcursor-string-example">Treating <span class="type">refcursor</span> as a cursor name</a>
</dt>
<dt>
7.1. <a href="binary-data.html#binary-data-example">Processing Binary Data in <acronym class="ACRONYM">JDBC</acronym></a>
</dt>
<dt>
8.1. <a href="escapes.html#escape-use-example">Using jdbc escapes</a>
</dt>
<dt>
9.1. <a href="geometric.html#geometric-circle-example">Using the CIRCLE datatype from <acronym class="ACRONYM">JDBC</acronym></a>
</dt>
<dt>
9.2. <a href="listennotify.html#listen-notify-example">Receiving Notifications</a>
</dt>
<dt>
9.3. <a href="server-prepare.html#server-prepared-statement-example">Using server side prepared statements</a>
</dt>
<dt>
11.1. <a href="ds-ds.html#ds-example">DataSource Code Example</a>
</dt>
<dt>
11.2. <a href="jndi.html#ds-jndi">DataSource <acronym class="ACRONYM">JNDI</acronym> Code Example</a>
</dt>
</dl>
</div>
</div> <!-- BOOK -->
<div class="NAVFOOTER">
<hr class="c2" width="100%" />
<table summary="Footer navigation table" width="100%" border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td valign="top" width="33%" align="left">&nbsp;</td>
<td valign="top" width="34%" align="center">&nbsp;</td>
<td valign="top" width="33%" align="right"><a href="intro.html" accesskey="N">Next</a></td>
</tr>
<tr>
<td valign="top" width="33%" align="left">&nbsp;</td>
<td valign="top" width="34%" align="center">&nbsp;</td>
<td valign="top" width="33%" align="right">Chapter 1. Introduction</td>
</tr>
</tbody>
</table>
</div>
</div> <!--docContent -->
<div id="docComments"></div>
<div id="docFooter">
<a class="navFooter" href="http://www.postgresql.org/about/privacypolicy">Privacy Policy</a> |
<a class="navFooter" href="http://www.postgresql.org/about/">About PostgreSQL</a><br/>
Copyright &copy; 1996-2017 The PostgreSQL Global Development Group
</div> <!-- pgFooter -->
</div> <!-- docContainer -->
</div> <!-- docContainerWrap -->
</body>

View File

@ -0,0 +1,29 @@
---
layout: default_docs
title: Chapter 1. Introduction
header: Chapter 1. Introduction
resource: media
previoustitle: The PostgreSQL™ JDBC Interface
previous: index.html
nexttitle: Chapter 2. Setting up the JDBC Driver
next: setup.html
---
Java Database Connectivity (JDBC) is an application programming interface (API) for
the programming language Java, which defines how a client may access a database.
It is part of the Java Standard Edition platform and provides methods to query and
update data in a database, and is oriented towards relational databases.
PostgreSQL JDBC Driver (PgJDBC for short) allows Java programs to connect to a PostgreSQL
database using standard, database-independent Java code. It is an open source JDBC driver
written in pure Java (Type 4), and communicates using the PostgreSQL native network protocol.
Because of this, the driver is platform independent; once compiled, it can be used on any system.
The current version of the driver should be compatible with PostgreSQL 8.2 and higher
using version 3.0 of the PostgreSQL protocol, and is compatible with Java 6 (JDBC 4.0),
Java 7 (JDBC 4.1) and Java 8 (JDBC 4.2).
This manual is not intended as a complete guide to JDBC programming, but should
help you get started. For more information refer to the standard JDBC API
documentation. Also, take a look at the examples included with the source.

View File

@ -0,0 +1,80 @@
---
layout: default_docs
title: Using Java 8 Date and Time classes
header: Chapter 5. Using Java 8 Date and Time classes
resource: media
previoustitle: Creating and Modifying Database Objects
previous: ddl.html
nexttitle: Chapter 6. Calling Stored Functions
next: callproc.html
---
The PostgreSQL™ JDBC driver implements native support for the
[Java 8 Date and Time API](http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html)
(JSR-310) using JDBC 4.2.
<a name="8-date-time-supported-data-types"></a>
**Table 5.1. Supported Java 8 Date and Time classes**
<table summary="Supported data type" class="CALSTABLE" border="1">
<tr>
<th>PostgreSQL™</th>
<th>Java SE 8</th>
</tr>
<tbody>
<tr>
<td>DATE</td>
<td>LocalDate</td>
</tr>
<tr>
<td>TIME [ WITHOUT TIME ZONE ]</td>
<td>LocalTime</td>
</tr>
<tr>
<td>TIMESTAMP [ WITHOUT TIME ZONE ]</td>
<td>LocalDateTime</td>
</tr>
<tr>
<td>TIMESTAMP WITH TIME ZONE</td>
<td>OffsetDateTime</td>
</tr>
</tbody>
</table>
This is closely aligned with tables B-4 and B-5 of the JDBC 4.2 specification.
Note that `ZonedDateTime`, `Instant` and
`OffsetTime / TIME WITH TIME ZONE` are not supported. Also note
that all `OffsetDateTime` instances will be in UTC (have offset 0).
This is because the backend stores them as UTC.
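The UTC normalization can be illustrated with plain `java.time` code, no database required (a sketch; the driver performs the equivalent conversion internally when reading `TIMESTAMP WITH TIME ZONE` values):

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class UtcNormalization
{
    // Same instant, but with the offset normalized to UTC (offset 0),
    // which is the form OffsetDateTime values are returned in.
    static OffsetDateTime toUtc(OffsetDateTime value)
    {
        return value.withOffsetSameInstant(ZoneOffset.UTC);
    }

    public static void main(String[] args)
    {
        OffsetDateTime local = OffsetDateTime.parse("2004-10-19T10:23:54+02:00");
        System.out.println(toUtc(local)); // 2004-10-19T08:23:54Z
    }
}
```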
<a name="reading-example"></a>
**Example 5.2. Reading Java 8 Date and Time values using JDBC**
```java
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("SELECT * FROM mytable WHERE columnfoo = 500");
while (rs.next())
{
System.out.print("Column 1 returned ");
LocalDate localDate = rs.getObject(1, LocalDate.class);
System.out.println(localDate);
}
rs.close();
st.close();
```
For other data types, simply pass other classes to `#getObject`.
Note that the Java data types need to match the SQL data types in Table 5.1 above.
<a name="writing-example"></a>
**Example 5.3. Writing Java 8 Date and Time values using JDBC**
```java
LocalDate localDate = LocalDate.now();
PreparedStatement st = conn.prepareStatement("INSERT INTO mytable (columnfoo) VALUES (?)");
st.setObject(1, localDate);
st.executeUpdate();
st.close();
```

View File

@ -0,0 +1,70 @@
---
layout: default_docs
title: Data Sources and JNDI
header: Chapter 11. Connection Pools and Data Sources
resource: media
previoustitle: Tomcat setup
previous: tomcat.html
nexttitle: Chapter 12. Logging with java.util.logging
next: logging.html
---
All the `ConnectionPoolDataSource` and `DataSource` implementations can be stored
in JNDI. In the case of the nonpooling implementations, a new instance will be
created every time the object is retrieved from JNDI, with the same settings as
the instance that was stored. For the pooling implementations, the same instance
will be retrieved as long as it is available (e.g., not a different JVM retrieving
the pool from JNDI), or a new instance with the same settings created otherwise.
In the application server environment, typically the application server's
`DataSource` instance will be stored in JNDI, instead of the PostgreSQL™
`ConnectionPoolDataSource` implementation.
In an application environment, the application may store the `DataSource` in JNDI
so that it doesn't have to make a reference to the `DataSource` available to all
application components that may need to use it. An example of this is shown in
[Example 11.2, “`DataSource` JNDI Code Example”](jndi.html#ds-jndi).
<a name="ds-jndi"></a>
**Example 11.2. `DataSource` JNDI Code Example**
Application code to initialize a pooling `DataSource` and add it to JNDI might
look like this:
```java
PGPoolingDataSource source = new PGPoolingDataSource();
source.setDataSourceName("A Data Source");
source.setServerName("localhost");
source.setDatabaseName("test");
source.setUser("testuser");
source.setPassword("testpassword");
source.setMaxConnections(10);
new InitialContext().rebind("DataSource", source);
```
Then code to use a connection from the pool might look like this:
```java
Connection conn = null;
try
{
DataSource source = (DataSource)new InitialContext().lookup("DataSource");
conn = source.getConnection();
// use connection
}
catch (SQLException e)
{
// log error
}
catch (NamingException e)
{
// DataSource wasn't found in JNDI
}
finally
{
if (conn != null)
{
try { conn.close(); } catch (SQLException e) {}
}
}
```

View File

@ -0,0 +1,20 @@
---
layout: default_docs
title: Large Objects
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Geometric Data Types
previous: geometric.html
nexttitle: Listen / Notify
next: listennotify.html
---
Large objects are supported in the standard JDBC specification. However, that
interface is limited, and the API provided by PostgreSQL™ allows for random
access to an object's contents, as if it were a local file.
The org.postgresql.largeobject package provides Java with the libpq C interface's
large object API. It consists of two classes: `LargeObjectManager`, which deals
with creating, opening and deleting large objects, and `LargeObject`, which deals
with an individual object. For an example usage of this API, please see
[Example 7.1, “Processing Binary Data in JDBC”](binary-data.html#binary-data-example).

View File

@ -0,0 +1,142 @@
---
layout: default_docs
title: Listen / Notify
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Large Objects
previous: largeobjects.html
nexttitle: Server Prepared Statements
next: server-prepare.html
---
Listen and Notify provide a simple interprocess communication mechanism for a
collection of processes accessing the same PostgreSQL™ database.
For more information on notifications consult the main server documentation. This
section only deals with the JDBC-specific aspects of notifications.
Standard `LISTEN`, `NOTIFY`, and `UNLISTEN` commands are issued via the standard
`Statement` interface. To retrieve and process notifications, the
`Connection` must be cast to the PostgreSQL™ specific extension interface
`PGConnection`. From there the `getNotifications()` method can be used to retrieve
any outstanding notifications.
### Note
> A key limitation of the JDBC driver is that it cannot receive asynchronous
notifications and must poll the backend to check if any notifications were issued.
A timeout can be given to the poll function, but then the execution of statements
from other threads will block.
<a name="listen-notify-example"></a>
**Example 9.2. Receiving Notifications**
```java
import java.sql.*;
public class NotificationTest
{
public static void main(String args[]) throws Exception
{
Class.forName("org.postgresql.Driver");
String url = "jdbc:postgresql://localhost:5432/test";
// Create two distinct connections, one for the notifier
// and another for the listener to show the communication
// works across connections although this example would
// work fine with just one connection.
Connection lConn = DriverManager.getConnection(url,"test","");
Connection nConn = DriverManager.getConnection(url,"test","");
// Create two threads, one to issue notifications and
// the other to receive them.
Listener listener = new Listener(lConn);
Notifier notifier = new Notifier(nConn);
listener.start();
notifier.start();
}
}
class Listener extends Thread
{
private Connection conn;
private org.postgresql.PGConnection pgconn;
Listener(Connection conn) throws SQLException
{
this.conn = conn;
this.pgconn = conn.unwrap(org.postgresql.PGConnection.class);
Statement stmt = conn.createStatement();
stmt.execute("LISTEN mymessage");
stmt.close();
}
public void run()
{
try
{
while (true)
{
org.postgresql.PGNotification notifications[] = pgconn.getNotifications();
// If this thread is the only one that uses the connection, a timeout can be used to
// receive notifications immediately:
// org.postgresql.PGNotification notifications[] = pgconn.getNotifications(10000);
if (notifications != null)
{
for (int i=0; i < notifications.length; i++)
System.out.println("Got notification: " + notifications[i].getName());
}
// wait a while before checking again for new
// notifications
Thread.sleep(500);
}
}
catch (SQLException sqle)
{
sqle.printStackTrace();
}
catch (InterruptedException ie)
{
ie.printStackTrace();
}
}
}
class Notifier extends Thread
{
private Connection conn;
public Notifier(Connection conn)
{
this.conn = conn;
}
public void run()
{
while (true)
{
try
{
Statement stmt = conn.createStatement();
stmt.execute("NOTIFY mymessage");
stmt.close();
Thread.sleep(2000);
}
catch (SQLException sqle)
{
sqle.printStackTrace();
}
catch (InterruptedException ie)
{
ie.printStackTrace();
}
}
}
}
```

View File

@ -0,0 +1,30 @@
---
layout: default_docs
title: Loading the Driver
header: Chapter 3. Initializing the Driver
resource: media
previoustitle: Chapter 3. Initializing the Driver
previous: use.html
nexttitle: Connecting to the Database
next: connect.html
---
Applications do not need to explicitly load the org.postgresql.Driver
class because the pgjdbc driver jar supports the Java Service Provider
mechanism. The driver will be loaded by the JVM when the application
connects to PostgreSQL™ (as long as the driver's jar file is on the
classpath).
### Note
Prior to Java 1.6, the driver had to be loaded by the application, either by calling
```java
Class.forName("org.postgresql.Driver");
```
or by passing the driver class name as a JVM parameter.
`java -Djdbc.drivers=org.postgresql.Driver example.ImageViewer`
These older methods of loading the driver are still supported, but they are no longer necessary.

View File

@ -0,0 +1,140 @@
---
layout: default_docs
title: Chapter 12. Logging using java.util.logging
header: Chapter 12. Logging using java.util.logging
resource: media
previoustitle: Data Sources and JNDI
previous: jndi.html
nexttitle: Further Reading
next: reading.html
---
**Table of Contents**
* [Overview](logging.html#overview)
* [Configuration](logging.html#configuration)
* [Enable logging by using connection properties](logging.html#conprop)
* [Enable logging by using logging.properties file](logging.html#fileprop)
<a name="overview"></a>
# Overview
The PostgreSQL JDBC Driver supports the use of logging (or tracing) to help resolve issues with the
PgJDBC Driver when it is used in your application.
The PgJDBC Driver uses the logging APIs of `java.util.logging`, which has been part of Java since JDK 1.4.
This makes it a good choice for the driver, since it doesn't add any external dependency on a logging
framework. `java.util.logging` is a very rich and powerful tool; it is beyond the scope of these docs
to explain its full potential, so please refer to the
[Java Logging Overview](https://docs.oracle.com/javase/8/docs/technotes/guides/logging/overview.html).
This logging support was added in version 42.0.0 of the PgJDBC Driver. Previous
versions used a custom mechanism to enable logging, which has been replaced by
`java.util.logging` in current versions; the old mechanism is no longer available.
Please note that while many people have asked for logging framework support for a long time, this
support is mainly intended for debugging the driver itself, not for general SQL query debugging.
<a name="configuration"></a>
# Configuration
The Logging APIs offer both static and dynamic configuration control. Static control enables field
service staff to set up a particular configuration and then re-launch the application with the new
logging settings. Dynamic control allows for updates to the logging configuration within a currently
running program.
As part of the support for a logging framework in the PgJDBC Driver, there was a need to facilitate
enabling the Logger using [connection properties](logging.html#conprop), which use static
control to enable tracing in the driver. Keep in mind that if you use an Application Server
(Tomcat, JBoss, WildFly, etc.) you should use the facilities it provides to enable logging,
as most Application Servers use dynamic configuration control, which makes it easy to enable or
disable logging at runtime.
The root logger used by the PgJDBC driver is `org.postgresql`.
<a name="conprop"></a>
## Enable logging by using connection properties
The driver provides a facility to enable logging using connection properties. It is not as feature rich
as using a `logging.properties` file, so it should be used only when you are actually debugging the driver.
The properties are `loggerLevel` and `loggerFile`:
**loggerLevel**: Logger level of the driver. Allowed values: `OFF`, `DEBUG` or `TRACE`.
This option sets the `java.util.logging.Logger` level of the driver based on the following mapping:
<table summary="Logger Level mapping" class="CALSTABLE" border="1">
<tr>
<th>loggerLevel</th>
<th>java.util.logging</th>
</tr>
<tbody>
<tr>
<td>OFF</td>
<td>OFF</td>
</tr>
<tr>
<td>DEBUG</td>
<td>FINE</td>
</tr>
<tr>
<td>TRACE</td>
<td>FINEST</td>
</tr>
</tbody>
</table>
As noted, no other levels are supported using this method. Internally, the driver's Logger
should not (for the most part) use other levels, as the intention is to debug the driver without
interfering with higher levels when applications enable them globally.
**loggerFile**: File name output of the Logger.
If set, the Logger will use a `java.util.logging.FileHandler` to write to the specified file.
If the parameter is not set, or the file can't be created, a `java.util.logging.ConsoleHandler`
will be used instead.
This parameter should be used together with `loggerLevel`.
The following is an example of how to use connection properties to enable logging:
```
jdbc:postgresql://localhost:5432/mydb?loggerLevel=DEBUG
jdbc:postgresql://localhost:5432/mydb?loggerLevel=TRACE&loggerFile=pgjdbc.log
```
<a name="fileprop"></a>
## Enable logging by using logging.properties file
The default Java logging framework stores its configuration in a file called `logging.properties`.
Settings are stored per line using a dot notation format. Java installs a global configuration file
in the `lib` folder of the Java installation directory, although you can use a separate configuration
file by specifying the `java.util.logging.config.file` property when starting a Java program.
`logging.properties` files can also be created and stored with individual projects.
The following is an example of settings that you can make in `logging.properties`:
```properties
# Specify the handler, the handlers will be installed during VM startup.
handlers= java.util.logging.FileHandler
# Default global logging level.
.level= OFF
# default file output is in user's home directory.
java.util.logging.FileHandler.pattern = %h/pgjdbc%u.log
java.util.logging.FileHandler.limit = 5000000
java.util.logging.FileHandler.count = 20
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = FINEST
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$s %2$s %5$s%6$s%n
# Facility specific properties.
org.postgresql.level=FINEST
```
And when you run your application you pass the system property:
`java -jar -Djava.util.logging.config.file=logging.properties run.jar`

View File

@ -0,0 +1,450 @@
/* PostgreSQL.org Documentation Style */
@import url("global.css");
@import url("table.css");
@import url("text.css");
body {
font-size: 76%;
}
div.NAVHEADER table {
margin-left: 0;
}
/* Container Definitions */
#docContainerWrap {
text-align: center; /* Win IE5 */
}
#docContainer {
margin: 0 auto;
width: 90%;
padding-bottom: 2em;
display: block;
text-align: left; /* Win IE5 */
}
#docHeader {
background-image: url("../img/docs/bg_hdr.png");
height: 83px;
margin: 0px;
padding: 0px;
display: block;
}
#docHeaderLogo {
position: relative;
width: 206px;
height: 83px;
border: 0px;
padding: 0px;
margin: 0 0 0 20px;
}
#docHeaderLogo img {
border: 0px;
}
#docNavSearchContainer {
padding-bottom: 2px;
}
#docNav, #docVersions {
position: relative;
text-align: left;
margin-left: 10px;
margin-top: 5px;
color: #666;
font-size: 0.95em;
}
#docSearch {
position: relative;
text-align: right;
padding: 0;
margin: 0;
color: #666;
}
#docTextSize {
text-align: right;
white-space: nowrap;
margin-top: 7px;
font-size: 0.95em;
}
#docSearch form {
position: relative;
top: 5px;
right: 0;
margin: 0; /* need for IE 5.5 OSX */
text-align: right; /* need for IE 5.5 OSX */
white-space: nowrap; /* for Opera */
}
#docSearch form label {
color: #666;
font-size: 0.95em;
}
#docSearch form input {
font-size: 0.95em;
}
#docSearch form #submit {
font-size: 0.95em;
background: #7A7A7A;
color: #fff;
border: 1px solid #7A7A7A;
padding: 1px 4px;
}
#docSearch form #q {
width: 170px;
font-size: 0.95em;
border: 1px solid #7A7A7A;
background: #E1E1E1;
color: #000000;
padding: 2px;
}
.frmDocSearch {
padding: 0;
margin: 0;
display: inline;
}
.inpDocSearch {
padding: 0;
margin: 0;
color: #000;
}
#docContent {
position: relative;
margin-left: 10px;
margin-right: 10px;
margin-top: 40px;
}
#docFooter {
position: relative;
font-size: 0.9em;
color: #666;
line-height: 1.3em;
margin-left: 10px;
margin-right: 10px;
}
#docComments {
margin-top: 10px;
}
#docClear {
clear: both;
margin: 0;
padding: 0;
}
/* Heading Definitions */
h1, h2, h3 {
font-weight: bold;
margin-top: 2ex;
}
h1 {
font-size: 1.4em;
}
h2 {
font-size: 1.2em !important;
}
h3 {
font-size: 1.1em;
}
h1 a:hover {
color: #EC5800;
text-decoration: none;
}
h2 a:hover,
h3 a:hover,
h4 a:hover {
color: #666666;
text-decoration: none;
}
/* Text Styles */
div.SECT2 {
margin-top: 4ex;
}
div.SECT3 {
margin-top: 3ex;
margin-left: 3ex;
}
.txtCurrentLocation {
font-weight: bold;
}
p, ol, ul, li {
line-height: 1.5em;
}
.txtCommentsWrap {
border: 2px solid #F5F5F5;
width: 100%;
}
.txtCommentsContent {
background: #F5F5F5;
padding: 3px;
}
.txtCommentsPoster {
float: left;
}
.txtCommentsDate {
float: right;
}
.txtCommentsComment {
padding: 3px;
}
#docContainer p code,
#docContainer ul code,
#docContainer pre code,
#docContainer pre tt,
#docContainer pre pre,
#docContainer tt tt,
#docContainer tt code,
#docContainer tt pre {
font-size: 1.5em;
}
pre.LITERALLAYOUT,
.SCREEN,
.SYNOPSIS,
.PROGRAMLISTING,
.REFSYNOPSISDIV p,
table.CAUTION,
table.WARNING,
blockquote.NOTE,
blockquote.TIP,
table.CALSTABLE {
-moz-box-shadow: 3px 3px 5px #DFDFDF;
-webkit-box-shadow: 3px 3px 5px #DFDFDF;
-khtml-box-shadow: 3px 3px 5px #DFDFDF;
-o-box-shadow: 3px 3px 5px #DFDFDF;
box-shadow: 3px 3px 5px #DFDFDF;
}
pre.LITERALLAYOUT,
.SCREEN,
.SYNOPSIS,
.PROGRAMLISTING,
.REFSYNOPSISDIV p,
table.CAUTION,
table.WARNING,
blockquote.NOTE,
blockquote.TIP {
color: black;
border-width: 1px;
border-style: solid;
padding: 2ex;
margin: 2ex 0 2ex 2ex;
overflow: auto;
-moz-border-radius: 8px;
-webkit-border-radius: 8px;
-khtml-border-radius: 8px;
border-radius: 8px;
}
pre.LITERALLAYOUT,
pre.SYNOPSIS,
pre.PROGRAMLISTING,
.REFSYNOPSISDIV p,
.SCREEN {
border-color: #CFCFCF;
background-color: #F7F7F7;
}
blockquote.NOTE,
blockquote.TIP {
border-color: #DBDBCC;
background-color: #EEEEDD;
padding: 14px;
width: 572px;
}
blockquote.NOTE,
blockquote.TIP,
table.CAUTION,
table.WARNING {
margin: 4ex auto;
}
blockquote.NOTE p,
blockquote.TIP p {
margin: 0;
}
blockquote.NOTE pre,
blockquote.NOTE code,
blockquote.TIP pre,
blockquote.TIP code {
margin-left: 0;
margin-right: 0;
-moz-box-shadow: none;
-webkit-box-shadow: none;
-khtml-box-shadow: none;
-o-box-shadow: none;
box-shadow: none;
}
.emphasis,
.c2 {
font-weight: bold;
}
.REPLACEABLE {
font-style: italic;
}
/* Table Styles */
table {
margin-left: 2ex;
}
table.CALSTABLE td,
table.CALSTABLE th,
table.CAUTION td,
table.CAUTION th,
table.WARNING td,
table.WARNING th {
border-style: solid;
}
table.CALSTABLE,
table.CAUTION,
table.WARNING {
border-spacing: 0;
border-collapse: collapse;
}
table.CALSTABLE
{
margin: 2ex 0 2ex 2ex;
background-color: #E0ECEF;
border: 2px solid #A7C6DF;
}
table.CALSTABLE tr:hover td
{
background-color: #EFEFEF;
}
table.CALSTABLE td {
background-color: #FFF;
}
table.CALSTABLE td,
table.CALSTABLE th {
border: 1px solid #A7C6DF;
padding: 0.5ex 0.5ex;
}
table.CAUTION,
table.WARNING {
border-collapse: separate;
display: block;
padding: 0;
max-width: 600px;
}
table.CAUTION {
background-color: #F5F5DC;
border-color: #DEDFA7;
}
table.WARNING {
background-color: #FFD7D7;
border-color: #DF421E;
}
table.CAUTION td,
table.CAUTION th,
table.WARNING td,
table.WARNING th {
border-width: 0;
padding-left: 2ex;
padding-right: 2ex;
}
table.CAUTION td,
table.CAUTION th {
border-color: #F3E4D5;
}
table.WARNING td,
table.WARNING th {
border-color: #FFD7D7;
}
td.c1,
td.c2,
td.c3,
td.c4,
td.c5,
td.c6 {
font-size: 1.1em;
font-weight: bold;
border-bottom: 0px solid #FFEFEF;
padding: 1ex 2ex 0;
}
/* Link Styles */
#docNav a {
font-weight: bold;
}
a:link,
a:visited,
a:active,
a:hover {
text-decoration: underline;
}
a:link,
a:active {
color:#0066A2;
}
a:visited {
color:#004E66;
}
a:hover {
color:#000000;
}
#docFooter a:link,
#docFooter a:visited,
#docFooter a:active {
color:#666;
}
#docContainer code.FUNCTION tt {
font-size: 1em;
}

View File

@ -0,0 +1,98 @@
/*
PostgreSQL.org - Global Styles
*/
body {
margin: 0;
padding: 0;
font-family: verdana, sans-serif;
font-size: 69%;
color: #000;
background-color: #fff;
}
h1 {
font-size: 1.4em;
font-weight: bold;
margin-top: 0em;
margin-bottom: 0em;
}
h2 {
font-size: 1.2em;
margin: 1.2em 0em 1.2em 0em;
font-weight: bold;
}
h3 {
font-size: 1.0em;
margin: 1.2em 0em 1.2em 0em;
font-weight: bold;
}
h4 {
font-size: 0.95em;
margin: 1.2em 0em 1.2em 0em;
font-weight: normal;
}
h5 {
font-size: 0.9em;
margin: 1.2em 0em 1.2em 0em;
font-weight: normal;
}
h6 {
font-size: 0.85em;
margin: 1.2em 0em 1.2em 0em;
font-weight: normal;
}
img {
border: 0;
}
ol, ul, li {/*
list-style: none;*/
font-size: 1.0em;
line-height: 1.2em;
margin-top: 0.2em;
margin-bottom: 0.1em;
}
p {
font-size: 1.0em;
line-height: 1.2em;
margin: 1.2em 0em;
}
td p {
margin: 0em 0em 1.2em;
}
li > p {
margin-top: 0.2em;
}
pre {
font-family: monospace;
font-size: 1.0em;
}
div#pgContentWrap code {
font-size: 1.2em;
padding: 1em;
margin: 2ex 0 2ex 2ex;
background: #F7F7F7;
border: 1px solid #CFCFCF;
-moz-border-radius: 8px;
-webkit-border-radius: 8px;
-khtml-border-radius: 8px;
border-radius: 8px;
display: block;
overflow: auto;
}
strong, b {
font-weight: bold;
}

View File

@ -0,0 +1,80 @@
.highlight {
border-color: #CFCFCF;
background-color: #F7F7F7;
color: black;
border-width: 1px;
border-style: solid;
padding: 2ex;
margin: 2ex 0 2ex 2ex;
overflow: auto;
-moz-border-radius: 8px;
-webkit-border-radius: 8px;
-khtml-border-radius: 8px;
border-radius: 8px;
-moz-box-shadow: 3px 3px 5px #DFDFDF;
-webkit-box-shadow: 3px 3px 5px #DFDFDF;
-khtml-box-shadow: 3px 3px 5px #DFDFDF;
-o-box-shadow: 3px 3px 5px #DFDFDF;
box-shadow: 3px 3px 5px #DFDFDF;
}
.highlight .py { color: #545454; }
.highlight .n { color: #3A3A3A; }
.highlight .c { color: #999988; font-style: italic } /* Comment */
.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
.highlight .k { font-weight: bold; } /* Keyword */
.highlight .o { font-weight: bold; } /* Operator */
.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
.highlight .ge { font-style: italic } /* Generic.Emph */
.highlight .gr { color: #aa0000 } /* Generic.Error */
.highlight .gh { color: #999999 } /* Generic.Heading */
.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
.highlight .go { color: #888888 } /* Generic.Output */
.highlight .gp { color: #555555 } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
.highlight .gt { color: #aa0000 } /* Generic.Traceback */
.highlight .kc { font-weight: bold } /* Keyword.Constant */
.highlight .kd { font-weight: bold } /* Keyword.Declaration */
.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
.highlight .kr { font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
.highlight .m { color: #009999 } /* Literal.Number */
.highlight .s { color: #d14 } /* Literal.String */
.highlight .na { color: #008080 } /* Name.Attribute */
.highlight .nb { color: #0086B3 } /* Name.Builtin */
.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
.highlight .no { color: #008080 } /* Name.Constant */
.highlight .ni { color: #800080 } /* Name.Entity */
.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
.highlight .nn { color: #555555 } /* Name.Namespace */
.highlight .nt { color: #000080 } /* Name.Tag */
.highlight .nv { color: #008080 } /* Name.Variable */
.highlight .ow { font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mf { color: #009999 } /* Literal.Number.Float */
.highlight .mh { color: #009999 } /* Literal.Number.Hex */
.highlight .mi { color: #009999 } /* Literal.Number.Integer */
.highlight .mo { color: #009999 } /* Literal.Number.Oct */
.highlight .sb { color: #d14 } /* Literal.String.Backtick */
.highlight .sc { color: #d14 } /* Literal.String.Char */
.highlight .sd { color: #d14 } /* Literal.String.Doc */
.highlight .s2 { color: #d14 } /* Literal.String.Double */
.highlight .se { color: #d14 } /* Literal.String.Escape */
.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
.highlight .si { color: #d14 } /* Literal.String.Interpol */
.highlight .sx { color: #d14 } /* Literal.String.Other */
.highlight .sr { color: #009926 } /* Literal.String.Regex */
.highlight .s1 { color: #d14 } /* Literal.String.Single */
.highlight .ss { color: #990073 } /* Literal.String.Symbol */
.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #008080 } /* Name.Variable.Class */
.highlight .vg { color: #008080 } /* Name.Variable.Global */
.highlight .vi { color: #008080 } /* Name.Variable.Instance */
.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */

View File

@ -0,0 +1,107 @@
/*
PostgreSQL.org - Table Styles
*/
div.tblBasic h2 {
margin: 25px 0 .5em 0;
}
div.tblBasic table {
background: #F5F5F5 url(../img/layout/nav_tbl_top_lft.png) top left no-repeat;
margin-left: 2ex;
margin-bottom: 15px;
}
div.tblBasic table th {
padding-top: 20px;
border-bottom: 1px solid #F0F8FF;
vertical-align: bottom;
}
div.tblBasic table td {
border-bottom: 1px solid #EFEFEF;
}
div.tblBasic table th,
div.tblBasic table td {
padding: 8px 11px;
color: #555555;
}
div.tblBasic table td.indented {
text-indent: 30px;
}
div.tblBasic table.tblCompact td {
padding: 3px 3px;
}
div.tblBasic table tr.lastrow td {
border-bottom: none;
padding-bottom: 13px;
}
div.tblBasic table.tblCompact tr.lastrow td {
padding-bottom: 3px;
}
div.tblBasic table tr.lastrow td.colFirstT,
div.tblBasic table tr.lastrow td.colFirst {
background: url(../img/layout/nav_tbl_btm_lft.png) bottom left no-repeat;
}
div.tblBasic table.tblBasicGrey th.colLast,
div.tblBasic table.tblCompact th.colLast {
background: #F5F5F5 url(../img/layout/nav_tbl_top_rgt.png) top right no-repeat;
}
div.tblBasic table.tblBasicGrey tr.lastrow td.colLastT,
div.tblBasic table.tblBasicGrey tr.lastrow td.colLast,
div.tblBasic table.tblCompact tr.lastrow td.colLast,
div.tblBasic table.tblCompact tr.lastrow td.colLastT{
background: #F5F5F5 url(../img/layout/nav_tbl_btm_rgt.png) bottom right no-repeat;
}
div.tblBasic table.tblBasicGrey tr.firstrow td.colLastT,
div.tblBasic table.tblBasicGrey tr.firstrow td.colLast,
div.tblBasic table.tblCompact tr.firstrow td.colLast {
background: #F5F5F5 url(../img/layout/nav_tbl_top_rgt.png) top right no-repeat;
}
div.tblBasic table th.colMid,
div.tblBasic table td.colMid,
div.tblBasic table th.colLast,
div.tblBasic table td.colLast {
background-color: #F5F5F5 ;
}
div.tblBasic table th.colLastC,
div.tblBasic table td.colFirstC,
div.tblBasic table td.colLastC {
text-align: center;
}
div.tblBasic table th.colLastR,
div.tblBasic table td.colFirstR,
div.tblBasic table td.colLastR {
text-align: right;
}
div.tblBasic table td.colFirstT,
div.tblBasic table td.colMidT,
div.tblBasic table td.colLastT {
vertical-align: top;
}
div.tblBasic table th.colLastRT,
div.tblBasic table td.colFirstRT,
div.tblBasic table td.colLastRT {
text-align: right;
vertical-align: top;
}
div.tblBasic table.tblBasicWhite th {
background-color: aliceblue;
}
div.tblBasic table.tblBasicWhite td {
background-color: white;
}

View File

@ -0,0 +1,162 @@
/*
PostgreSQL.org - Text Styles
*/
/* Heading Definitions */
h1 {
color: #EC5800;
}
h2 {
color: #666;
}
h3 {
color: #666;
}
h4 {
color: #666;
}
/* Text Styles */
.txtColumn1 {
width: 50%;
line-height: 1.3em;
}
.txtColumn2 {
width: 50%;
line-height: 1.5em;
}
.txtCurrentLocation {
font-weight: bold;
}
.txtDivider {
font-size: 0.8em;
color: #E1E1E1;
padding-left: 4px;
padding-right: 4px;
}
.txtNewsEvent {
font-size: 0.9em;
color: #0094C7;
}
.txtDate {
font-size: 0.9em;
color: #666;
}
.txtMediumGrey {
color: #666;
}
.txtFormLabel {
color: #666;
font-weight: bold;
text-align: right;
vertical-align: top;
}
.txtRequiredField {
color: #EC5800;
}
.txtImportant {
color: #EC5800;
}
.txtOffScreen {
position: absolute;
left: -1999px;
width: 1990px;
}
#txtFrontFeatureHeading {
padding-bottom: 1.1em;
}
#txtFrontFeatureLink a {
font-size: 1.2em;
font-weight: bold;
padding-left: 5px;
}
#txtFrontUserText {
font-size: 1.0em;
color: #666;
margin-top: 12px;
}
#txtFrontUserName {
font-size: 0.9em;
color: #666;
margin-top: 9px;
font-weight: bold;
}
#txtFrontUserLink {
font-size: 0.9em;
color: #666;
margin-top: 11px;
margin-left: 1px;
}
#txtFrontUserLink img {
padding-right: 5px;
}
#txtFrontSupportUsText {
font-size: 1.0em;
margin-top: 9px;
}
#txtFrontSupportUsLink {
font-size: 0.9em;
margin-top: 6px;
}
#txtFrontSupportUsLink img {
padding-right: 7px;
}
/* Link Styles */
a:link { color:#0085B0; text-decoration: underline; }
a:visited { color:#004E66; text-decoration: underline; }
a:active { color:#0085B0; text-decoration: underline; }
a:hover { color:#000000; text-decoration: underline; }
#pgFooter a:link { color:#666; text-decoration: underline; }
#pgFooter a:visited { color:#666; text-decoration: underline; }
#pgFooter a:active { color:#666; text-decoration: underline; }
#pgFooter a:hover { color:#000000; text-decoration: underline; }
#txtFrontUserName a:link { color:#666; text-decoration: underline; }
#txtFrontUserName a:visited { color:#666; text-decoration: underline; }
#txtFrontUserName a:active { color:#666; text-decoration: underline; }
#txtFrontUserName a:hover { color:#000; text-decoration: underline; }
#txtArchives a:visited { color:#00536E; text-decoration: underline; }
#txtArchives pre { word-wrap: break-word; font-size: 150%; }
#txtArchives tt { word-wrap: break-word; font-size: 150%; }
#pgFrontUSSContainer h2, #pgFrontUSSContainer h3 {
margin: 0;
padding: 0;
}
#pgFrontNewsEventsContainer h2, #pgFrontNewsEventsContainer h3 {
margin: 0;
padding: 0;
}
#pgFrontNewsEventsContainer h3 img {
margin-bottom: 10px;
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.5 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 173 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 92 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 218 B

Binary file not shown.

After

Width:  |  Height:  |  Size: 212 B

View File

@ -0,0 +1,19 @@
---
layout: default_docs
title: Escape for outer joins
header: Chapter 8. JDBC escapes
resource: media
previoustitle: Chapter 8. JDBC escapes
previous: escapes.html
nexttitle: Date-time escapes
next: escapes-datetime.html
---
You can specify outer joins using the following syntax: `{oj table (LEFT|RIGHT|FULL) OUTER JOIN (table | outer-join)
ON search-condition }`
For example:
```java
rs = stmt.executeQuery( "select * from {oj a left outer join b on (a.i=b.i)} ");
```

View File

@ -0,0 +1,25 @@
---
layout: default_docs
title: Preparing the Database Server for JDBC
header: Chapter 2. Setting up the JDBC Driver
resource: media
previoustitle: Setting up the Class Path
previous: classpath.html
nexttitle: Creating a Database
next: your-database.html
---
Because Java does not support Unix sockets, the PostgreSQL™ server must be
configured to allow TCP/IP connections. Starting with server version 8.0, TCP/IP
connections are allowed from `localhost`. To allow connections on interfaces
other than the loopback interface, you must modify the `postgresql.conf` file's `listen_addresses`
setting.
For server versions prior to 8.0, the server does not listen on any interface by
default, and you must set `tcpip_socket = true` in the `postgresql.conf` file.
Once you have made sure the server is correctly listening for TCP/IP connections,
the next step is to verify that users are allowed to connect to the server. Client
authentication is set up in `pg_hba.conf`. Refer to the main PostgreSQL™ documentation
for details. The JDBC driver supports the `trust`, `ident`, `password`, `md5`, and
`crypt` authentication methods.
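Once the server accepts TCP/IP connections, a client connects with a standard JDBC URL. The sketch below only builds the URL and credential properties; the host, port, database name, and credentials are hypothetical placeholders, and the commented-out `DriverManager` call marks where a real connection would be opened.

```java
import java.util.Properties;

public class ConnectionUrlDemo {
    // Builds a JDBC URL for a TCP/IP connection to the given server.
    static String buildUrl(String host, int port, String database) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        String url = buildUrl("localhost", 5432, "mydb");
        Properties props = new Properties();
        props.setProperty("user", "myuser");     // must match a pg_hba.conf rule
        props.setProperty("password", "secret"); // used by password/md5 authentication
        System.out.println(url);
        // A real connection would then be opened with:
        // Connection con = DriverManager.getConnection(url, props);
    }
}
```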

View File

@ -0,0 +1,123 @@
---
layout: default_docs
title: Chapter 5. Issuing a Query and Processing the Result
header: Chapter 5. Issuing a Query and Processing the Result
resource: media
previoustitle: Custom SSLSocketFactory
previous: ssl-factory.html
nexttitle: Using the Statement or PreparedStatement Interface
next: statement.html
---
**Table of Contents**
* [Getting results based on a cursor](query.html#query-with-cursor)
* [Using the `Statement` or `PreparedStatement` Interface](statement.html)
* [Using the `ResultSet` Interface](resultset.html)
* [Performing Updates](update.html)
* [Creating and Modifying Database Objects](ddl.html)
* [Using Java 8 Date and Time classes](java8-date-time.html)
Any time you want to issue SQL statements to the database, you require a `Statement`
or `PreparedStatement` instance. Once you have a `Statement` or `PreparedStatement`,
you can use it to issue a query. This will return a `ResultSet` instance, which contains
the entire result (see the section called [“Getting results based on a cursor”](query.html#query-with-cursor)
for how to alter this behaviour). [Example 5.1, “Processing a Simple Query in JDBC”](query.html#query-example)
illustrates this process.
<a name="query-example"></a>
**Example 5.1. Processing a Simple Query in JDBC**
This example will issue a simple query and print out the first column of each
row using a `Statement`.
```java
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("SELECT * FROM mytable WHERE columnfoo = 500");
while (rs.next())
{
System.out.print("Column 1 returned ");
System.out.println(rs.getString(1));
}
rs.close();
st.close();
```
This example issues the same query as before but uses a `PreparedStatement` and
a bind value in the query.
```java
int foovalue = 500;
PreparedStatement st = conn.prepareStatement("SELECT * FROM mytable WHERE columnfoo = ?");
st.setInt(1, foovalue);
ResultSet rs = st.executeQuery();
while (rs.next())
{
System.out.print("Column 1 returned ");
System.out.println(rs.getString(1));
}
rs.close();
st.close();
```
<a name="query-with-cursor"></a>
# Getting results based on a cursor
By default the driver collects all the results for the query at once. This can
be inconvenient for large data sets, so the JDBC driver provides a means of basing
a `ResultSet` on a database cursor and only fetching a small number of rows.
A small number of rows are cached on the client side of the connection and when
exhausted the next block of rows is retrieved by repositioning the cursor.
### Note
> Cursor based `ResultSet`s cannot be used in all situations. There are a number of
restrictions which will make the driver silently fall back to fetching the
whole `ResultSet` at once.
* The connection to the server must be using the V3 protocol. This is the default
for (and is only supported by) server versions 7.4 and later.
* The `Connection` must not be in autocommit mode. The backend closes cursors at
the end of transactions, so in autocommit mode the backend will have
closed the cursor before anything can be fetched from it.
* The `Statement` must be created with a `ResultSet` type of `ResultSet.TYPE_FORWARD_ONLY`.
This is the default, so no code will need to be rewritten to take advantage
of this, but it also means that you cannot scroll backwards or otherwise
jump around in the `ResultSet`.
* The query given must be a single statement, not multiple statements strung
together with semicolons.
<a name="fetchsize-example"></a>
**Example 5.2. Setting fetch size to turn cursors on and off.**
Changing code to cursor mode is as simple as setting the fetch size of the
`Statement` to the appropriate size. Setting the fetch size back to 0 will cause
all rows to be cached (the default behaviour).
```java
// make sure autocommit is off
conn.setAutoCommit(false);
Statement st = conn.createStatement();
// Turn use of the cursor on.
st.setFetchSize(50);
ResultSet rs = st.executeQuery("SELECT * FROM mytable");
while (rs.next())
{
System.out.print("a row was returned.");
}
rs.close();
// Turn the cursor off.
st.setFetchSize(0);
rs = st.executeQuery("SELECT * FROM mytable");
while (rs.next())
{
System.out.print("many rows were returned.");
}
rs.close();
// Close the statement.
st.close();
```

View File

@ -0,0 +1,18 @@
---
layout: default_docs
title: Chapter 13. Further Reading
header: Chapter 13. Further Reading
resource: media
previoustitle: Chapter 12. Logging with java.util.logging
previous: logging.html
nexttitle: The PostgreSQL™ JDBC Interface
next: index.html
---
If you have not yet read it, you are advised to read the JDBC API documentation
(supplied with Oracle's JDK) and the JDBC specification. Both are available from
[http://www.oracle.com/technetwork/java/javase/jdbc/index.html](http://www.oracle.com/technetwork/java/javase/jdbc/index.html).
[http://jdbc.postgresql.org/index.html](http://jdbc.postgresql.org/index.html)
contains updated information not included in this manual, including Javadoc class
documentation and a FAQ. Additionally, it offers precompiled drivers.

View File

@ -0,0 +1,363 @@
---
layout: default_docs
title: Physical and Logical replication API
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Server Prepared Statements
previous: server-prepare.html
nexttitle: Arrays
next: arrays.html
---
**Table of Contents**
* [Overview](replication.html#overview)
* [Logical replication](replication.html#logical-replication)
* [Physical replication](replication.html#physical-replication)
<a name="overview"></a>
# Overview
PostgreSQL™ 9.4 (released in December 2014) introduced a new feature called logical replication. Logical replication allows
changes from a database to be streamed in real time to an external system. The difference between physical replication and
logical replication is that logical replication sends data in a logical format whereas physical replication sends data in a binary format. Additionally, logical replication can stream a single table or database, while binary replication replicates the entire cluster in an all-or-nothing fashion; that is, there is no way to replicate a specific table or database using binary replication.
Prior to logical replication, keeping an external system synchronized in real time was problematic. The application would have to update/invalidate the appropriate cache entries, reindex the data in your search engine, send it to your analytics system, and so on.
This suffers from race conditions and reliability problems. For example, if slightly different data gets written to two different datastores (perhaps due to a bug or a race condition), the contents of the datastores will gradually drift apart and become more and more inconsistent over time. Recovering from such gradual data corruption is difficult.
Logical decoding takes the database's write-ahead log (WAL) and gives us access to row-level change events:
every time a row in a table is inserted, updated, or deleted, that's an event. Those events are grouped by transaction,
and appear in the order in which they were committed to the database. Aborted/rolled-back transactions
do not appear in the stream. Thus, if you apply the change events in the same order, you end up with an exact,
transactionally consistent copy of the database. This resembles the Event Sourcing pattern that you may have previously implemented
in your application, but now it's available out of the box from the PostgreSQL™ database.
For access to real-time changes, PostgreSQL™ provides the streaming replication protocol. The replication protocol can be physical or logical. The physical replication protocol is used for master/secondary replication; the logical replication protocol can be used
to stream changes to an external system.
Since the JDBC API does not include replication support, `PGConnection` implements the PostgreSQL™ replication API.
## Configure database
Your database should be configured to enable logical or physical replication.
### postgresql.conf
* Property `max_wal_senders` should be at least equal to the number of replication consumers.
* Property `wal_keep_segments` should specify the number of WAL segments that must be kept and cannot be removed from the database.
* Property `wal_level` should be set to `logical` for logical replication.
* Property `max_replication_slots` should be greater than zero for logical replication, because logical replication can't
work without a replication slot.
### pg_hba.conf
Allow a user with replication privileges to connect to the replication stream.
```
local replication all trust
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
```
### Configuration for examples
*postgresql.conf*
```ini
max_wal_senders = 4 # max number of walsender processes
wal_keep_segments = 4 # in logfile segments, 16MB each; 0 disables
wal_level = logical # minimal, replica, or logical
max_replication_slots = 4 # max number of replication slots
```
*pg_hba.conf*
```
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
```
<a name="logical-replication"></a>
# Logical replication
Logical replication uses a replication slot to reserve WAL logs on the server and also defines which decoding plugin to use to decode the WAL logs to the required format; for example, you can decode changes as JSON, protobuf, etc. To demonstrate how to use the pgjdbc replication API we will use the `test_decoding` plugin that is included in the `postgresql-contrib` package, but you can use your own decoding plugin. There are a few on GitHub which can be used as examples.
In order to use the replication API, the connection has to be created in replication mode. In this mode the connection is not available to
execute SQL commands and can only be used with the replication API. This is a restriction imposed by PostgreSQL™.
**Example 9.4. Create replication connection.**
```java
String url = "jdbc:postgresql://localhost:5432/postgres";
Properties props = new Properties();
PGProperty.USER.set(props, "postgres");
PGProperty.PASSWORD.set(props, "postgres");
PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
PGProperty.REPLICATION.set(props, "database");
PGProperty.PREFER_QUERY_MODE.set(props, "simple");
Connection con = DriverManager.getConnection(url, props);
PGConnection replConnection = con.unwrap(PGConnection.class);
```
The entire replication API is grouped in `org.postgresql.replication.PGReplicationConnection` and is available
via `org.postgresql.PGConnection#getReplicationAPI`.
Before you can start the replication protocol, you need to have a replication slot, which can also be created via the pgjdbc API.
**Example 9.5. Create replication slot via pgjdbc API**
```java
replConnection.getReplicationAPI()
.createReplicationSlot()
.logical()
.withSlotName("demo_logical_slot")
.withOutputPlugin("test_decoding")
.make();
```
Once we have the replication slot, we can create a ReplicationStream.
**Example 9.6. Create logical replication stream.**
```java
PGReplicationStream stream =
replConnection.getReplicationAPI()
.replicationStream()
.logical()
.withSlotName("demo_logical_slot")
.withSlotOption("include-xids", false)
.withSlotOption("skip-empty-xacts", true)
.start();
```
The replication stream will send all changes since the creation of the replication slot, or from the replication slot's
restart LSN if the slot was already used for replication. You can also start streaming changes from a particular LSN position; in that case the LSN position should be specified when you create the replication stream.
**Example 9.7. Create logical replication stream from particular position.**
```java
LogSequenceNumber waitLSN = LogSequenceNumber.valueOf("6F/E3C53568");
PGReplicationStream stream =
replConnection.getReplicationAPI()
.replicationStream()
.logical()
.withSlotName("demo_logical_slot")
.withSlotOption("include-xids", false)
.withSlotOption("skip-empty-xacts", true)
.withStartPosition(waitLSN)
.start();
```
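The textual LSN form accepted by `LogSequenceNumber.valueOf(String)` is two hexadecimal halves separated by `/`, the upper and lower 32 bits of a 64-bit WAL position. As a minimal, self-contained illustration (plain Java, not pgjdbc code), the mapping can be sketched like this:

```java
public class LsnDemo {
    // Parses the textual LSN form "XXX/YYY" (two hex halves) into the single
    // 64-bit value that LogSequenceNumber.valueOf(long) also accepts.
    static long parseLsn(String lsn) {
        String[] parts = lsn.split("/");
        long high = Long.parseLong(parts[0], 16); // upper 32 bits
        long low  = Long.parseLong(parts[1], 16); // lower 32 bits
        return (high << 32) | low;
    }

    public static void main(String[] args) {
        long value = parseLsn("6F/E3C53568");
        System.out.println(Long.toHexString(value)); // 6fe3c53568
    }
}
```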
Via `withSlotOption` we can also specify options that will be sent to our output plugin; this allows you to customize decoding.
For example, you might have your own output plugin with a property `sensitive=true` that includes changes to sensitive columns in the change
event.
**Example 9.8. Example output with include-xids=true**
```
BEGIN 105779
table public.test_logic_table: INSERT: pk[integer]:1 name[character varying]:'previous value'
COMMIT 105779
```
**Example 9.9. Example output with include-xids=false**
```
BEGIN
table public.test_logic_table: INSERT: pk[integer]:1 name[character varying]:'previous value'
COMMIT
```
During replication the database and consumer periodically exchange ping messages. When the database or client does not receive
a ping message within the configured timeout, replication is deemed to have stopped, an exception will be thrown, and the database will free resources. In PostgreSQL™ the ping timeout is configured by the property `wal_sender_timeout` (default = 60 seconds).
The replication stream in pgjdbc can be configured to send feedback (ping) when required or at a fixed time interval.
It is recommended to send feedback (ping) to the database more often than the configured `wal_sender_timeout`. In production I use a value equal to `wal_sender_timeout / 3`. This avoids potential network problems and lets changes be streamed without timeout disconnects. To specify the feedback interval use the `withStatusInterval` method.
**Example 9.10. Replication stream with configured feedback interval equal to 20 sec**
```java
PGReplicationStream stream =
replConnection.getReplicationAPI()
.replicationStream()
.logical()
.withSlotName("demo_logical_slot")
.withSlotOption("include-xids", false)
.withSlotOption("skip-empty-xacts", true)
.withStatusInterval(20, TimeUnit.SECONDS)
.start();
```
After creating the `PGReplicationStream`, it's time to start receiving changes in real time. Changes can be received from the
stream in a blocking manner (`org.postgresql.replication.PGReplicationStream#read`)
or a non-blocking manner (`org.postgresql.replication.PGReplicationStream#readPending`).
Both methods receive changes as a `java.nio.ByteBuffer` containing the payload from the output plugin. We can't receive
part of a message, only the full message that was sent by the output plugin. The ByteBuffer contains the message in the format defined by the decoding output plugin; it can be a simple String, JSON, or whatever the plugin produces. That is why pgjdbc returns the raw ByteBuffer instead of making assumptions.
**Example 9.11. Sending a message from an output plugin.**
```
OutputPluginPrepareWrite(ctx, true);
appendStringInfo(ctx->out, "BEGIN %u", txn->xid);
OutputPluginWrite(ctx, true);
```
**Example 9.12. Receive changes via replication stream.**
```java
while (true) {
//non blocking receive message
ByteBuffer msg = stream.readPending();
if (msg == null) {
TimeUnit.MILLISECONDS.sleep(10L);
continue;
}
int offset = msg.arrayOffset();
byte[] source = msg.array();
int length = source.length - offset;
System.out.println(new String(source, offset, length));
}
```
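The offset/length handling in the loop above can be factored into a small helper. This is a sketch assuming a heap-backed (array-backed) buffer, which is what the driver returns here; a direct buffer would need `ByteBuffer.get` instead.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class PayloadDemo {
    // Converts a heap ByteBuffer returned by read()/readPending() into a String,
    // honoring the buffer's array offset as in the loop above.
    static String payloadToString(ByteBuffer msg) {
        int offset = msg.arrayOffset();
        byte[] source = msg.array();
        int length = source.length - offset;
        return new String(source, offset, length, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer msg = ByteBuffer.wrap("BEGIN".getBytes(StandardCharsets.UTF_8));
        System.out.println(payloadToString(msg)); // BEGIN
    }
}
```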
As mentioned previously, the replication stream should periodically send feedback to the database to prevent disconnection by
timeout. Feedback is automatically sent when `read` or `readPending` is called, if it is time to send feedback. Feedback can also be sent via `org.postgresql.replication.PGReplicationStream#forceUpdateStatus()` regardless of the timeout. Another important duty of feedback is to provide the server with the Log Sequence Number (LSN) that has been successfully received and applied by the consumer; this is necessary for monitoring and for truncating/archiving WALs that are no longer needed. If replication is restarted, it will start from the last successfully processed LSN that was sent via feedback to the database.
The API provides the following feedback mechanism to indicate the LSN successfully applied by the current consumer (LSNs before it can be truncated or archived):
`org.postgresql.replication.PGReplicationStream#setFlushedLSN` and
`org.postgresql.replication.PGReplicationStream#setAppliedLSN`. You can always get the last received LSN via
`org.postgresql.replication.PGReplicationStream#getLastReceiveLSN`.
**Example 9.13. Add feedback indicating a successfully processed LSN**
```java
while (true) {
//Receive the last message successfully sent to the queue. LSN ordered.
LogSequenceNumber successfullySendToQueue = getQueueFeedback();
if (successfullySendToQueue != null) {
stream.setAppliedLSN(successfullySendToQueue);
stream.setFlushedLSN(successfullySendToQueue);
}
//non blocking receive message
ByteBuffer msg = stream.readPending();
if (msg == null) {
TimeUnit.MILLISECONDS.sleep(10L);
continue;
}
asyncSendToQueue(msg, stream.getLastReceiveLSN());
}
```
**Example 9.14. Full example of logical replication**
```java
String url = "jdbc:postgresql://localhost:5432/test";
Properties props = new Properties();
PGProperty.USER.set(props, "postgres");
PGProperty.PASSWORD.set(props, "postgres");
PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
PGProperty.REPLICATION.set(props, "database");
PGProperty.PREFER_QUERY_MODE.set(props, "simple");
Connection con = DriverManager.getConnection(url, props);
PGConnection replConnection = con.unwrap(PGConnection.class);
replConnection.getReplicationAPI()
.createReplicationSlot()
.logical()
.withSlotName("demo_logical_slot")
.withOutputPlugin("test_decoding")
.make();
//some changes after create replication slot to demonstrate receive it
sqlConnection.setAutoCommit(true);
Statement st = sqlConnection.createStatement();
st.execute("insert into test_logic_table(name) values('first tx changes')");
st.close();
st = sqlConnection.createStatement();
st.execute("update test_logic_table set name = 'second tx change' where pk = 1");
st.close();
st = sqlConnection.createStatement();
st.execute("delete from test_logic_table where pk = 1");
st.close();
PGReplicationStream stream =
replConnection.getReplicationAPI()
.replicationStream()
.logical()
.withSlotName("demo_logical_slot")
.withSlotOption("include-xids", false)
.withSlotOption("skip-empty-xacts", true)
.withStatusInterval(20, TimeUnit.SECONDS)
.start();
while (true) {
//non blocking receive message
ByteBuffer msg = stream.readPending();
if (msg == null) {
TimeUnit.MILLISECONDS.sleep(10L);
continue;
}
int offset = msg.arrayOffset();
byte[] source = msg.array();
int length = source.length - offset;
System.out.println(new String(source, offset, length));
//feedback
stream.setAppliedLSN(stream.getLastReceiveLSN());
stream.setFlushedLSN(stream.getLastReceiveLSN());
}
```
The output looks like this, where each line is a separate message.
```
BEGIN
table public.test_logic_table: INSERT: pk[integer]:1 name[character varying]:'first tx changes'
COMMIT
BEGIN
table public.test_logic_table: UPDATE: pk[integer]:1 name[character varying]:'second tx change'
COMMIT
BEGIN
table public.test_logic_table: DELETE: pk[integer]:1
COMMIT
```
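Consumers usually need to pick these messages apart. Below is a minimal, hypothetical parser for the `test_decoding` text format shown above; the format is plugin-specific, so a real consumer should match its own plugin's output.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DecodeDemo {
    // Matches a test_decoding change line like:
    //   table public.test_logic_table: INSERT: pk[integer]:1 ...
    private static final Pattern CHANGE =
        Pattern.compile("^table (\\S+): (INSERT|UPDATE|DELETE): (.*)$");

    // Returns {table, operation, columns} for a change line,
    // or null for BEGIN/COMMIT and unrecognized lines.
    static String[] parseChange(String line) {
        Matcher m = CHANGE.matcher(line);
        if (!m.matches()) {
            return null;
        }
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }

    public static void main(String[] args) {
        String[] change = parseChange(
            "table public.test_logic_table: DELETE: pk[integer]:1");
        System.out.println(change[0] + " " + change[1]); // public.test_logic_table DELETE
    }
}
```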
<a name="physical-replication"></a>
# Physical replication
The API for physical replication looks like the API for logical replication, but physical replication does not require a replication
slot, and the ByteBuffer will contain the binary form of the WAL logs. The binary WAL format is a very low-level API and can change from version to version; that is why replication between different major PostgreSQL™ versions is not possible. However, physical replication can contain much important data that is not available via logical replication, which is why pgjdbc contains an implementation for both.
**Example 9.15. Use physical replication**
```java
LogSequenceNumber lsn = getCurrentLSN();
Statement st = sqlConnection.createStatement();
st.execute("insert into test_physic_table(name) values('previous value')");
st.close();
PGReplicationStream stream =
pgConnection
.getReplicationAPI()
.replicationStream()
.physical()
.withStartPosition(lsn)
.start();
ByteBuffer read = stream.read();
```

View File

@ -0,0 +1,19 @@
---
layout: default_docs
title: Using the ResultSet Interface
header: Chapter 5. Issuing a Query and Processing the Result
resource: media
previoustitle: Using the Statement or PreparedStatement Interface
previous: statement.html
nexttitle: Performing Updates
next: update.html
---
The following must be considered when using the `ResultSet` interface:
* Before reading any values, you must call `next()`. This returns true if there
is a result, but more importantly, it prepares the row for processing.
* You must close a `ResultSet` by calling `close()` once you have finished using
it.
* Once you make another query with the `Statement` used to create a `ResultSet`,
the currently open `ResultSet` instance is closed automatically.

View File

@ -0,0 +1,303 @@
---
layout: default_docs
title: Server Prepared Statements
header: Chapter 9. PostgreSQL™ Extensions to the JDBC API
resource: media
previoustitle: Listen / Notify
previous: listennotify.html
nexttitle: Physical and Logical replication API
next: replication.html
---
### Motivation
The PostgreSQL™ server allows clients to compile SQL statements that are expected
to be reused to avoid the overhead of parsing and planning the statement for every
execution. This functionality is available at the SQL level via PREPARE and EXECUTE
beginning with server version 7.3, and at the protocol level beginning with server
version 7.4, but as Java developers we really just want to use the standard
`PreparedStatement` interface.
> PostgreSQL 9.2 release notes: prepared statements used to be optimized once, without any knowledge
of the parameters' values. With 9.2, the planner will use specific plans depending on the parameters
sent (the query will be planned at execution), except if the query is executed several times and
the planner decides that the generic plan is not too much more expensive than the specific plans.
Server-side prepared statements can improve execution speed because:
1. They send just a statement handle (e.g. `S_1`) instead of the full SQL text
1. They enable use of binary transfer (e.g. binary int4, binary timestamps, etc.); the parameters and results are much faster to parse
1. They enable reuse of the server-side execution plan
1. The client can reuse the result set column definition, so it does not have to receive and parse metadata on each execution
### Activation
> Previous versions of the driver used PREPARE and EXECUTE to implement
server-prepared statements. This is supported on all server versions beginning
with 7.3, but produced application-visible changes in query results, such as
missing ResultSet metadata and row update counts. The current driver uses the V3
protocol-level equivalents which avoid these changes in query results.
The driver uses server-side prepared statements **by default** when the `PreparedStatement` API is used.
In order to get to server-side prepare, you need to execute the query 5 times (this can be
configured via the `prepareThreshold` connection property).
An internal counter keeps track of how many times the statement has been executed, and when it
reaches the threshold the driver will start to use server-side prepared statements.
It is generally a good idea to reuse the same `PreparedStatement` object for performance reasons,
however the driver is able to server-prepare statements automatically across `connection.prepareStatement(...)` calls.
For instance:
    PreparedStatement ps = con.prepareStatement("select /*test*/ ?::int4");
    ps.setInt(1, 42);
    ps.executeQuery().close();
    ps.close();

    ps = con.prepareStatement("select /*test*/ ?::int4");
    ps.setInt(1, 43);
    ps.executeQuery().close();
    ps.close();
is less efficient than
    PreparedStatement ps = con.prepareStatement("select /*test*/ ?::int4");
    ps.setInt(1, 42);
    ps.executeQuery().close();
    ps.setInt(1, 43);
    ps.executeQuery().close();
    ps.close();
however pgjdbc can use server-side prepared statements in both cases.
Note: the `Statement` object is bound to a `Connection`, and it is not a good idea to access the same
`Statement` and/or `Connection` from multiple concurrent threads (except for `cancel()`, `close()`, and similar cases). It might be safer to just `close()` the statement rather than trying to cache it somehow.
Server-prepared statements consume memory both on the client and the server, so pgjdbc limits the number
of server-prepared statements per connection. It can be configured via `preparedStatementCacheQueries`
(default `256`, the number of queries known to pgjdbc), and `preparedStatementCacheSizeMiB` (default `5`,
that is the client side cache size in megabytes per connection). Only a subset of `statement cache` is
server-prepared as some of the statements might fail to reach `prepareThreshold`.
### Deactivation
There might be cases when you would want to disable use of server-prepared statements.
For instance, if you route connections through a balancer that is incompatible with server-prepared statements,
you have little choice.
You can disable usage of server-side prepared statements by setting `prepareThreshold=0`.
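As a sketch, the threshold can be supplied either through a `Properties` object or as a URL parameter; the host and database names below are hypothetical placeholders.

```java
import java.util.Properties;

public class PrepareThresholdDemo {
    // Builds connection properties that disable server-prepared statements.
    static Properties disableServerPrepare() {
        Properties props = new Properties();
        props.setProperty("prepareThreshold", "0"); // 0 = never server-prepare
        return props;
    }

    // The same setting can be embedded directly in the JDBC URL.
    static String urlWithThreshold(String host, String db, int threshold) {
        return "jdbc:postgresql://" + host + "/" + db + "?prepareThreshold=" + threshold;
    }

    public static void main(String[] args) {
        System.out.println(disableServerPrepare().getProperty("prepareThreshold"));
        System.out.println(urlWithThreshold("localhost", "mydb", 0));
    }
}
```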
### Corner cases
#### DDL
The V3 protocol avoids sending column metadata on each execution, and the BIND message specifies the output column format.
That creates a problem for cases like:

    SELECT * FROM mytable;
    ALTER TABLE mytable ADD COLUMN ...;
    SELECT * FROM mytable;

That results in a `cached plan must not change result type` error, and it causes the transaction to fail.
The recommendation is:
1. Use explicit column names in the SELECT list
1. Avoid altering column types
#### DEALLOCATE ALL, DISCARD ALL
There are explicit commands to deallocate all server-side prepared statements. Executing them would result in
the following server-side error message: `prepared statement name is invalid`.
Of course this could defeat pgjdbc; however, there are cases when you need to discard statements (e.g. after lots of DDL).
The recommendation is:
1. Use simple `DEALLOCATE ALL` and/or `DISCARD ALL` commands, avoid nesting the commands into pl/pgsql or alike. The driver does understand top-level DEALLOCATE/DISCARD commands, and it invalidates client-side cache as well
1. Reconnect. The cache is per connection, so it would get invalidated if you reconnect
#### set search_path=...
PostgreSQL allows you to customize `search_path`, and it provides great power to the developer.
With great power the following case could happen:
    set search_path='app_v1';
    SELECT * FROM mytable;
    set search_path='app_v2';
    SELECT * FROM mytable; -- Does mytable mean app_v1.mytable or app_v2.mytable here?
Server side prepared statements are linked to database object IDs, so a prepared statement could fetch data from the "old"
`app_v1.mytable` table. It is hard to tell which behaviour is expected; however, pgjdbc tries to track
`search_path` changes, and it invalidates the prepare cache accordingly.
The recommendation is:
1. Avoid changing `search_path` often, as it invalidates server side prepared statements
1. Use simple `set search_path...` commands, avoid nesting the commands into pl/pgsql or alike, otherwise
pgjdbc won't be able to identify the `search_path` change
#### Re-execution of failed statements
Unfortunately, a single `cached plan must not change result type` error can cause the whole transaction to fail.
The driver could re-execute the statement automatically in certain cases.
1. In case the transaction has not failed (e.g. the transaction did not exist before execution of
the statement that caused `cached plan...` error), then pgjdbc re-executes the statement automatically.
This makes the application happy, and avoids unnecessary errors.
1. In case the transaction is in a failed state, there's nothing to do but roll it back. pgjdbc does have an
"automatic savepoint" feature, and it can automatically roll back and retry the statement. The behaviour
is controlled via the `autosave` property (default `never`). The value `conservative` will auto-rollback
for errors related to invalid server-prepared statements.
Note: `autosave` might result in **severe** performance issues for long transactions, as PostgreSQL backend
is not optimized for the case of long transactions and lots of savepoints.
#### Replication connection
PostgreSQL replication connections do not allow the use of server side prepared statements, so pgjdbc
uses simple queries when the `replication` connection property is activated.
#### Use of server-prepared statements for con.createStatement()
By default, pgjdbc uses server-prepared statements for `PreparedStatement` only; however, you might want
to activate server side prepared statements for regular `Statement` as well. For instance, if you
execute the same statement through `con.createStatement().executeQuery(...)`, then you might improve
performance by caching the statement. Of course it is better to use `PreparedStatement`s explicitly;
however, the driver has an option to cache simple statements as well.
You can do that by setting `preferQueryMode` to `extendedCacheEverything`.
Note: the option is intended mostly for diagnostics and debugging, so be careful how you use it.
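A sketch of enabling the mode via connection properties (credentials and database are placeholders):

```java
import java.util.Properties;

public class PreferQueryModeExample {
    // Properties enabling statement caching even for plain Statements.
    // Intended for diagnostics only, as the documentation notes.
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("user", "test"); // placeholder credentials
        props.setProperty("preferQueryMode", "extendedCacheEverything");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("preferQueryMode"));
    }
}
```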
#### Bind placeholder datatypes
The database optimizes the execution plan for given parameter types.
Consider the below case:
    -- create table rooms (id int4, name varchar);
    -- create index name__rooms on rooms(name);
    PreparedStatement ps = con.prepareStatement("select id from rooms where name=?");
    ps.setString(1, "42");

It works as expected; however, what happens if one uses `setInt` instead?

    ps.setInt(1, 42);
Even though the result would be identical, the first variation (`setString` case) enables the database
to use index `name__rooms`, and the latter does not.
In case the database gets `42` as integer, it uses the plan like `where cast(name as int4) = ?`.
The plan has to be specific for the (`SQL text`; `parameter types`) combination, so the driver
has to invalidate server side prepared statements in case the statement is used with different
parameter types.
This gets especially painful for batch operations as you don't want to interrupt the batch
by using alternating datatypes.
The most typical case is as follows (don't ever use this in production):
    PreparedStatement ps = con.prepareStatement("select id from rooms where ...");
    if (param instanceof String) {
        ps.setString(1, (String) param);
    } else if (param instanceof Integer) {
        ps.setInt(1, ((Integer) param).intValue());
    } else {
        // Does it really matter which type of NULL to use?
        // In fact, it does since data types specify which server-procedure to call
        ps.setNull(1, Types.INTEGER);
    }
As you might guess, `setString` vs `setNull(..., Types.INTEGER)` result in alternating datatypes,
and it forces the driver to invalidate and re-prepare server side statement.
The recommendation is to use a consistent datatype for each bind placeholder, and to use the same type
for `setNull`.
Check out `org.postgresql.test.jdbc2.PreparedStatementTest.testAlternatingBindType` example for more details.
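The recommendation above can be wrapped in a small helper; this is a sketch (the helper name is ours, not part of the driver) that always binds the int4 type, null or not:

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class ConsistentBindExample {
    // Bind a nullable Integer so the placeholder always carries the
    // int4 type, whether or not the value is null. Keeping the bound
    // type stable lets the driver keep reusing the server-prepared
    // statement instead of invalidating and re-preparing it.
    static void bindNullableInt(PreparedStatement ps, int index, Integer value)
            throws SQLException {
        if (value == null) {
            ps.setNull(index, Types.INTEGER); // same type as setInt below
        } else {
            ps.setInt(index, value);
        }
    }
}
```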
#### Debugging
In case you run into `cached plan must not change result type` or `prepared statement \"S_2\" does not exist`
the following might be helpful to debug the case.
1. Client logging. If you add `loggerLevel=TRACE&loggerFile=pgjdbc-trace.log`, you will get a trace
of the messages sent between the driver and the backend
1. You might check `org.postgresql.test.jdbc2.AutoRollbackTestSuite` as it verifies lots of combinations
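For instance (the file name and database are illustrative), the logging properties can be appended to the URL:

```java
public class TraceLoggingExample {
    // Build a URL enabling driver-level trace logging to a file,
    // as suggested in the debugging checklist above.
    static String buildUrl(String host, String db) {
        return "jdbc:postgresql://" + host + "/" + db
                + "?loggerLevel=TRACE&loggerFile=pgjdbc-trace.log";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("localhost", "test"));
    }
}
```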
<a name="server-prepared-statement-example"></a>
**Example 9.3. Using server side prepared statements**
```java
import java.sql.*;
public class ServerSidePreparedStatement
{
public static void main(String args[]) throws Exception
{
Class.forName("org.postgresql.Driver");
String url = "jdbc:postgresql://localhost:5432/test";
Connection conn = DriverManager.getConnection(url,"test","");
PreparedStatement pstmt = conn.prepareStatement("SELECT ?");
// cast to the pg extension interface
org.postgresql.PGStatement pgstmt = pstmt.unwrap(org.postgresql.PGStatement.class);
// on the third execution start using server side statements
pgstmt.setPrepareThreshold(3);
for (int i=1; i<=5; i++)
{
pstmt.setInt(1,i);
boolean usingServerPrepare = pgstmt.isUseServerPrepare();
ResultSet rs = pstmt.executeQuery();
rs.next();
System.out.println("Execution: "+i+", Used server side: " + usingServerPrepare + ", Result: "+rs.getInt(1));
rs.close();
}
pstmt.close();
conn.close();
}
}
```
This produces the expected result of using server side prepared statements upon
the third execution.
```
Execution: 1, Used server side: false, Result: 1
Execution: 2, Used server side: false, Result: 2
Execution: 3, Used server side: true, Result: 3
Execution: 4, Used server side: true, Result: 4
Execution: 5, Used server side: true, Result: 5
```
The example shown above requires the programmer to use PostgreSQL™ specific code
in a supposedly portable API which is not ideal. Also it sets the threshold only
for that particular statement which is some extra typing if we wanted to use that
threshold for every statement. Let's take a look at the other ways to set the
threshold to enable server side prepared statements. There is already a hierarchy
in place above a `PreparedStatement`, the `Connection` it was created from, and
above that, the source of the connection, be it a `DataSource` or a URL. The server
side prepared statement threshold can be set at any of these levels such that
the value will be the default for all of its children.
```java
// pg extension interfaces
org.postgresql.PGConnection pgconn;
org.postgresql.PGStatement pgstmt;
// set a prepared statement threshold for connections created from this url
String url = "jdbc:postgresql://localhost:5432/test?prepareThreshold=3";
// see that the connection has picked up the correct threshold from the url
Connection conn = DriverManager.getConnection(url,"test","");
pgconn = conn.unwrap(org.postgresql.PGConnection.class);
System.out.println(pgconn.getPrepareThreshold()); // Should be 3
// see that the statement has picked up the correct threshold from the connection
PreparedStatement pstmt = conn.prepareStatement("SELECT ?");
pgstmt = pstmt.unwrap(org.postgresql.PGStatement.class);
System.out.println(pgstmt.getPrepareThreshold()); // Should be 3
// change the connection's threshold and ensure that new statements pick it up
pgconn.setPrepareThreshold(5);
pstmt = conn.prepareStatement("SELECT ?");
pgstmt = pstmt.unwrap(org.postgresql.PGStatement.class);
System.out.println(pgstmt.getPrepareThreshold()); // Should be 5
```
---
layout: default_docs
title: Chapter 2. Setting up the JDBC Driver
header: Chapter 2. Setting up the JDBC Driver
resource: media
previoustitle: Chapter 1. Introduction
previous: intro.html
nexttitle: Setting up the Class Path
next: classpath.html
---
**Table of Contents**
* [Getting the Driver](setup.html#build)
* [Setting up the Class Path](classpath.html)
* [Preparing the Database Server for JDBC](prepare.html)
* [Creating a Database](your-database.html)
This section describes the steps you need to take before you can write or run
programs that use the JDBC interface.
<a name="build"></a>
# Getting the Driver
Precompiled versions of the driver can be downloaded from the [PostgreSQL™ JDBC web site](http://jdbc.postgresql.org).
Alternatively you can build the driver from source, but you should only need to
do this if you are making changes to the source code. To build the JDBC driver,
you need Ant 1.5 or higher and a JDK. Ant is a special tool for building Java-based
packages. It can be downloaded from the [Ant web site](http://ant.apache.org/index.html).
If you have several Java compilers installed, it depends on the Ant configuration
which one gets used. Precompiled Ant distributions are typically set up to read
a file `.antrc` in the current user's home directory for configuration. For example,
to use a different JDK than the default, this may work:
`JAVA_HOME=/usr/local/jdk1.6.0_07`
`JAVACMD=$JAVA_HOME/bin/java`
To compile the driver simply run **ant** in the top level directory. The compiled
driver will be placed in `jars/postgresql.jar`. The resulting driver will be built
for the version of Java you are running. If you build with a 1.4 or 1.5 JDK you
will build a version that supports the JDBC 3 specification and if you build with
a 1.6 or higher JDK you will build a version that supports the JDBC 4 specification.
---
layout: default_docs
title: Configuring the Client
header: Chapter 4. Using SSL
resource: media
previoustitle: Chapter 4. Using SSL
previous: ssl.html
nexttitle: Custom SSLSocketFactory
next: ssl-factory.html
---
There are a number of connection parameters for configuring the client for SSL. See [SSL Connection parameters](connect.html#ssl).
The simplest is `ssl=true`: passing this to the driver will cause it to validate both
the SSL certificate and the hostname (same as `verify-full`). **Note** this is different from
libpq, which defaults to a non-validating SSL connection.
In this mode, when establishing a SSL connection the JDBC driver will validate the server's
identity preventing "man in the middle" attacks. It does this by checking that the server
certificate is signed by a trusted authority, and that the host you are connecting to is the
same as the hostname in the certificate.
If you **require** encryption and want the connection to fail if it can't be encrypted, then set
`sslmode=require`. This ensures that the server is configured to accept SSL connections for this
host/IP address and that the server recognizes the client certificate. In other words, if the server
does not accept SSL connections or the client certificate is not recognized, the connection will fail.
**Note** in this mode we will accept all server certificates.
If `sslmode=verify-ca`, the server is verified by checking the certificate chain up to the root
certificate stored on the client.
If `sslmode=verify-full`, the server host name will be verified to make sure it matches the name
stored in the server certificate.
The SSL connection will fail if the server certificate cannot be verified. `verify-full` is recommended
in most security-sensitive environments.
In the case where certificate validation is failing, you can try setting `sslcert=` (an empty value) so that LibPQFactory will
not send the client certificate. If the server is not configured to authenticate using the certificate,
it should connect.
The location of the client certificate, client key and root certificate can be overridden with the
`sslcert`, `sslkey`, and `sslrootcert` settings respectively. These default to `/defaultdir/postgresql.crt`,
`/defaultdir/postgresql.pk8`, and `/defaultdir/root.crt` respectively, where defaultdir is
`${user.home}/.postgresql/` on *nix systems and `%appdata%/postgresql/` on Windows.
Finer control of the SSL connection can be achieved using the `sslmode` connection parameter.
This parameter is the same as the libpq `sslmode` parameter, and the current SSL implementation
supports the following modes:
<div class="tblBasic">
<table class="tblBasicWhite" border="1" summary="SSL Mode Descriptions" cellspacing="0" cellpadding="0">
<thead>
<tr>
<th>sslmode</th><th>Eavesdropping Protection</th><th> MITM Protection</th><th/>
</tr>
</thead>
<tr>
<td>disable</td><td>No</td><td>No</td><td>I don't care about security and don't want to pay the overhead for encryption</td>
</tr>
<tr>
<td>allow</td><td>Maybe</td><td>No</td><td>I don't care about security but will pay the overhead for encryption if the server insists on it</td>
</tr>
<tr>
<td>prefer</td><td>Maybe</td><td>No</td><td>I don't care about encryption but will pay the overhead of encryption if the server supports it</td>
</tr>
<tr>
<td>require</td><td>Yes</td><td>No</td><td>I want my data to be encrypted, and I accept the overhead. I trust that the network will make sure I always connect to the server I want.</td>
</tr>
<tr>
<td>verify-ca</td><td>Yes</td><td>Depends on CA policy</td><td>I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust.</td>
</tr>
<tr>
<td>verify-full</td><td>Yes</td><td>Yes</td><td>I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server I trust, and that it's the one I specify.</td>
</tr>
</table>
</div>
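As a sketch (host and database names are hypothetical), the chosen mode is simply appended to the connection URL:

```java
public class SslModeExample {
    // Build a URL selecting an sslmode from the table above.
    // verify-full checks both the certificate chain and the host name.
    static String buildUrl(String host, String db, String sslmode) {
        return "jdbc:postgresql://" + host + "/" + db + "?sslmode=" + sslmode;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("dbhost.example.com", "test", "verify-full"));
    }
}
```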
### Note
If you are using Java's default mechanism (not LibPQFactory) to create the SSL connection, you will
need to make the server certificate available to Java. The first step is to convert
it to a form Java understands.
`openssl x509 -in server.crt -out server.crt.der -outform der`
From here the easiest thing to do is import this certificate into Java's system
truststore.
`keytool -keystore $JAVA_HOME/lib/security/cacerts -alias postgresql -import -file server.crt.der`
The default password for the cacerts keystore is `changeit`. The alias to postgresql
is not important and you may select any name you desire.
If you do not have access to the system cacerts truststore you can create your
own truststore.
`keytool -keystore mystore -alias postgresql -import -file server.crt.der`
When starting your Java application you must specify this keystore and password
to use.
`java -Djavax.net.ssl.trustStore=mystore -Djavax.net.ssl.trustStorePassword=mypassword com.mycompany.MyApp`
In the event of problems extra debugging information is available by adding
`-Djavax.net.debug=ssl` to your command line.
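The same two settings can also be applied programmatically before the first connection is opened; this is a sketch with a placeholder store name and password:

```java
public class TrustStoreExample {
    // Programmatic equivalent of the -Djavax.net.ssl.* flags shown
    // above. Must run before the first SSL connection is created.
    static void configureTrustStore(String path, String password) {
        System.setProperty("javax.net.ssl.trustStore", path);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    public static void main(String[] args) {
        configureTrustStore("mystore", "mypassword"); // placeholders
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```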
<a name="nonvalidating"></a>
## Using SSL without Certificate Validation
In some situations it may not be possible to configure your Java environment to
make the server certificate available, for example in an applet. For a large
scale deployment it would be best to get a certificate signed by recognized
certificate authority, but that is not always an option. The JDBC driver provides
an option to establish a SSL connection without doing any validation, but please
understand the risk involved before enabling this option.
A non-validating connection is established via a custom `SSLSocketFactory` class that is provided
with the driver. Setting the connection URL parameter `sslfactory=org.postgresql.ssl.NonValidatingFactory`
will turn off all SSL validation.
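A sketch of such a URL (host and database are placeholders):

```java
public class NonValidatingExample {
    // Build a URL that turns off all certificate validation.
    // Use only when you understand the risk described above.
    static String buildUrl(String host, String db) {
        return "jdbc:postgresql://" + host + "/" + db
                + "?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory";
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("localhost", "test"));
    }
}
```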
---
layout: default_docs
title: Custom SSLSocketFactory
header: Chapter 4. Using SSL
resource: media
previoustitle: Configuring the Client
previous: ssl-client.html
nexttitle: Chapter 5. Issuing a Query and Processing the Result
next: query.html
---
PostgreSQL™ provides a way for developers to customize how a SSL connection is
established. This may be used to provide a custom certificate source or other
extensions by allowing the developer to create their own `SSLContext` instance.
The connection URL parameter `sslfactory` allows the user to specify which custom
class to use for creating the `SSLSocketFactory`. The class name specified by `sslfactory`
must extend `javax.net.ssl.SSLSocketFactory` and be available to the driver's classloader.
This class must have a zero-argument constructor or, preferably, a single-argument constructor
taking a `Properties` argument. There is a simple `org.postgresql.ssl.DefaultJavaSSLFactory` provided which uses the
default Java `SSLSocketFactory`.
Information on how to actually implement such a class is beyond the scope of this
documentation. Places to look for help are the [JSSE Reference Guide](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
and the source to the `NonValidatingFactory` provided by the JDBC driver.
---
layout: default_docs
title: Chapter 4. Using SSL
header: Chapter 4. Using SSL
resource: media
previoustitle: Connecting to the Database
previous: connect.html
nexttitle: Configuring the Client
next: ssl-client.html
---
**Table of Contents**
* [Configuring the Server](ssl.html#ssl-server)
* [Configuring the Client](ssl-client.html)
* [Using SSL without Certificate Validation](ssl-client.html#nonvalidating)
* [Custom SSLSocketFactory](ssl-factory.html)
<a name="ssl-server"></a>
# Configuring the Server
Configuring the PostgreSQL™ server for SSL is covered in the [main
documentation](http://www.postgresql.org/docs/current/static/ssl-tcp.html),
so it will not be repeated here. Before trying to access your SSL enabled
server from Java, make sure you can get to it via **psql**. You should
see output like the following if you have established an SSL connection.
```
$ ./bin/psql -h localhost -U postgres
psql (9.6.2)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=#
```
---
layout: default_docs
title: Using the Statement or PreparedStatement Interface
header: Chapter 5. Issuing a Query and Processing the Result
resource: media
previoustitle: Chapter 5. Issuing a Query and Processing the Result
previous: query.html
nexttitle: Using the ResultSet Interface
next: resultset.html
---
The following must be considered when using the `Statement` or `PreparedStatement`
interface:
* You can use a single `Statement` instance as many times as you want. You could
create one as soon as you open the connection and use it for the connection's
lifetime. But you have to remember that only one `ResultSet` can exist
per `Statement` or `PreparedStatement` at a given time.
* If you need to perform a query while processing a `ResultSet`, you can simply
create and use another `Statement`.
* If you are using threads, and several are using the database, you must use a
separate `Statement` for each thread. Refer to [Chapter 10, *Using the Driver in a Multithreaded or a Servlet Environment*](thread.html)
if you are thinking of using threads, as it covers some important points.
* When you are done using the `Statement` or `PreparedStatement` you should close
it.
* In JDBC, the question mark (`?`) is the placeholder for the positional parameters of a `PreparedStatement`.
There are, however, a number of PostgreSQL operators that contain a question mark.
To keep such question marks in a SQL statement from being interpreted as positional parameters,
use two question marks (`??`) as an escape sequence.
You can also use this escape sequence in a `Statement`, but that is not required;
in a `Statement` a single `?` can be used as an operator directly.
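As an illustration (the operator shown is PostgreSQL's `jsonb` key-exists operator `?`, written `??` so it survives parameter parsing):

```java
public class QuestionMarkEscapeExample {
    // In a PreparedStatement, "??" reaches the server as the single "?"
    // operator, while the lone trailing "?" stays a positional parameter.
    static String keyExistsQuery() {
        return "SELECT '{\"a\":1}'::jsonb ?? ?";
    }

    public static void main(String[] args) {
        System.out.println(keyExistsQuery());
    }
}
```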
---
layout: default_docs
title: Chapter 10. Using the Driver in a Multithreaded or a Servlet Environment
header: Chapter 10. Using the Driver in a Multithreaded or a Servlet Environment
resource: media
previoustitle: Server Prepared Statements
previous: server-prepare.html
nexttitle: Chapter 11. Connection Pools and Data Sources
next: datasource.html
---
The PostgreSQL™ JDBC driver is not thread safe.
The PostgreSQL server is not threaded. Each connection creates a new process on the server;
as such any concurrent requests to the process would have to be serialized.
The driver makes no guarantees that methods on connections are synchronized.
It will be up to the caller to synchronize calls to the driver.
A notable exception is org/postgresql/jdbc/TimestampUtils.java, which is thread-safe.
---
layout: default_docs
title: Tomcat setup
header: Chapter 11. Connection Pools and Data Sources
resource: media
previoustitle: Applications DataSource
previous: ds-ds.html
nexttitle: Data Sources and JNDI
next: jndi.html
---
### Note
The postgresql.jar file must be placed in $CATALINA_HOME/common/lib in both
Tomcat 4 and 5.
The absolute easiest way to set this up in either Tomcat instance is to use the
admin web application that comes with Tomcat; simply add the datasource to the
context you want to use it in.
For Tomcat 4, place the following inside the &lt;Context&gt; tag in
conf/server.xml:
```xml
<Resource name="jdbc/postgres" scope="Shareable" type="javax.sql.DataSource"/>
<ResourceParams name="jdbc/postgres">
<parameter>
<name>validationQuery</name>
<value>select version();</value>
</parameter>
<parameter>
<name>url</name>
<value>jdbc:postgresql://localhost/davec</value>
</parameter>
<parameter>
<name>password</name>
<value>davec</value>
</parameter>
<parameter>
<name>maxActive</name>
<value>4</value>
</parameter>
<parameter>
<name>maxWait</name>
<value>5000</value>
</parameter>
<parameter>
<name>driverClassName</name>
<value>org.postgresql.Driver</value>
</parameter>
<parameter>
<name>username</name>
<value>davec</value>
</parameter>
<parameter>
<name>maxIdle</name>
<value>2</value>
</parameter>
</ResourceParams>
```
For Tomcat 5 you can use the above method, except that the XML goes inside the
&lt;DefaultContext&gt; tag inside the &lt;Host&gt; tag, e.g. &lt;Host&gt; ... &lt;DefaultContext&gt; ...
Alternatively there is a conf/Catalina/hostname/context.xml file. For example,
http://localhost:8080/servlet-example corresponds to the file $CATALINA_HOME/conf/Catalina/localhost/servlet-example.xml.
Inside this file, place the above XML inside the &lt;Context&gt; tag.
Then you can use the following code to access the connection.
```java
import javax.naming.*;
import javax.sql.*;
import java.sql.*;
public class DBTest
{
String foo = "Not Connected";
int bar = -1;
public void init()
{
try
{
Context ctx = new InitialContext();
if(ctx == null )
throw new Exception("Boom - No Context");
// /jdbc/postgres is the name of the resource above
DataSource ds = (DataSource)ctx.lookup("java:comp/env/jdbc/postgres");
if (ds != null)
{
Connection conn = ds.getConnection();
if(conn != null)
{
foo = "Got Connection "+conn.toString();
Statement stmt = conn.createStatement();
ResultSet rst = stmt.executeQuery("select id, foo, bar from testdata");
if(rst.next())
{
foo=rst.getString(2);
bar=rst.getInt(3);
}
conn.close();
}
}
}
catch(Exception e)
{
e.printStackTrace();
}
}
public String getFoo() { return foo; }
public int getBar() { return bar;}
}
```
---
layout: default_docs
title: Performing Updates
header: Chapter 5. Issuing a Query and Processing the Result
resource: media
previoustitle: Using the ResultSet Interface
previous: resultset.html
nexttitle: Creating and Modifying Database Objects
next: ddl.html
---
To change data (perform an `INSERT`, `UPDATE`, or `DELETE`) you use the
`executeUpdate()` method. This method is similar to the method `executeQuery()`
used to issue a `SELECT` statement, but it doesn't return a `ResultSet`; instead
it returns the number of rows affected by the `INSERT`, `UPDATE`, or `DELETE`
statement. [Example 5.3, “Deleting Rows in JDBC”](update.html#delete-example)
illustrates the usage.
<a name="delete-example"></a>
**Example 5.3. Deleting Rows in JDBC**
This example will issue a simple `DELETE` statement and print out the number of
rows deleted.
```java
int foovalue = 500;
PreparedStatement st = conn.prepareStatement("DELETE FROM mytable WHERE columnfoo = ?");
st.setInt(1, foovalue);
int rowsDeleted = st.executeUpdate();
System.out.println(rowsDeleted + " rows deleted");
st.close();
```
---
layout: default_docs
title: Chapter 3. Initializing the Driver
header: Chapter 3. Initializing the Driver
resource: media
previoustitle: Creating a Database
previous: your-database.html
nexttitle: Chapter 3. Loading the Driver
next: load.html
---
**Table of Contents**
* [Importing JDBC](use.html#import)
* [Loading the Driver](load.html)
* [Connecting to the Database](connect.html)
* [Connection Parameters](connect.html#connection-parameters)
This section describes how to load and initialize the JDBC driver in your programs.
<a name="import"></a>
# Importing JDBC
Any source that uses JDBC needs to import the `java.sql` package, using:
```java
import java.sql.*;
```
### Note
You should not import the `org.postgresql` package unless you are using
PostgreSQL™ extensions to the JDBC API.
---
layout: default_docs
title: Creating a Database
header: Chapter 2. Setting up the JDBC Driver
resource: media
previoustitle: Preparing the Database Server for JDBC
previous: prepare.html
nexttitle: Chapter 3. Initializing the Driver
next: use.html
---
When creating a database to be accessed via JDBC it is important to select an
appropriate encoding for your data. Many other client interfaces do not care
what data you send back and forth, and will allow you to do inappropriate things,
but Java makes sure that your data is correctly encoded. Do not use a database
that uses the `SQL_ASCII` encoding. This is not a real encoding and you will
have problems the moment you store data in it that does not fit in the seven
bit ASCII character set. If you do not know what your encoding will be, or are
otherwise unsure about what you will be storing, the `UNICODE` encoding is a
reasonable default to use.