1. Overview

The goal of this document is to provide an easy path to start using PGJDBC-NG for those people looking for a JDBC compliant driver for PostgreSQL. It also aims to provide detailed reference documentation for those wishing to use the advanced features of JDBC integrated with PostgreSQL.

1.1. What is PGJDBC-NG?

PGJDBC-NG aims to be a fully compliant JDBC driver for PostgreSQL, implementing JDBC 4.2 onwards. The major goal of the project is to implement all the advanced features of JDBC also supported in PostgreSQL; including composite types, arrays of primitive and composite types, asynchronous notifications, and more.

1.2. Supported Java Versions

PGJDBC-NG requires Java 8 (or higher) which equates to the JDBC 4.2 specification. Support for Java 9’s JDBC 4.3 is forthcoming and will be backwards compatible.

1.3. Getting Started

1.3.1. Dependencies

To use the JDBC Driver or DataSource implementation you’ll need to add the dependencies to your project or classpath.

Maven Style Dependencies

Adding the driver to a project built with a tool that supports maven style dependencies (e.g. Maven, Gradle, Ivy, etc.) only requires adding the primary dependency to your build file.

Maven

For Maven simply add the dependency to your pom.xml

<dependency>
    <groupId>com.impossibl.pgjdbc-ng</groupId>
    <artifactId>pgjdbc-ng</artifactId>
    <version>0.8.9</version>
</dependency>
Gradle

For Gradle the dependency is added to your build.gradle

dependencies {
    compile "com.impossibl.pgjdbc-ng:pgjdbc-ng:0.8.9"
}
Using these artifacts will pull in the driver implementation and its required dependencies. If you wish to depend on a version of the driver that embeds its dependencies you can use the artifact name pgjdbc-ng-all-0.8.9.jar instead; this is useful to work around dependency conflicts in certain situations but otherwise should not be necessary.
Manual Download

If you are using a project that does not support Maven style dependencies or you wish to add the driver globally to an application server like Glassfish, WildFly or Tomcat you can download the dependency embedded version of the driver and copy it to the location required by your project or usage.

1.3.2. Connecting to PostgreSQL

To connect to a PostgreSQL server with PGJDBC-NG you need to use one of its implementations of JDBC’s java.sql.Driver or javax.sql.DataSource interfaces. Each method has its own use cases and connection routine.

Using the JDBC Driver

The simplest way to obtain a connection is by using the java.sql.Driver implementation and only requires a properly constructed URL along with a call to java.sql.DriverManager.

String url = "jdbc:pgsql://localhost/test";  (1)
String user = "test";
String password = "test";
Connection connection = DriverManager.getConnection(url, user, password); (2)
1 pgsql requests a PGJDBC-NG connection to the server listening on localhost using the default port 5432 and targeting the test database.
2 Opens a connection to the PostgreSQL server.

The URL provided in the example above is one of the simplest URLs required to make a connection. The full URL format accepts credentials, connection settings, fallback hosts and more. Connection URLs provides details about the extended format of URLs accepted by the driver.

Using the JDBC DataSource

While using the Driver implementation can be the easiest way to connect to a server, using a javax.sql.DataSource implementation can allow using enhanced features such as connection pooling and XA transactions.

Obtaining a connection using one of the DataSource implementations is done by explicitly creating an instance of the DataSource and acquiring connections from it.

PGDataSource ds = new PGDataSource(); (1)
ds.setServerName("localhost"); (2)
ds.setDatabaseName("test"); (3)
ds.setUser("test"); (4)
ds.setPassword("test");

Connection connection = ds.getConnection(); (5)
1 Creates a new instance of the standard DataSource implementation.
2 Sets the target of the connection to localhost
3 Sets the target database to test
4 Set the user & password credentials to connect with.
5 Opens a connection to the PostgreSQL server.

While the example uses the basic implementation of DataSource, there are a couple of implementations to choose from detailed in Available DataSources.

Only the most basic properties of the DataSource implementations are demonstrated here. The DataSource provides properties for all of the available settings, which control almost every aspect of vended connections. All settings available for connections are detailed in Connection Settings.

As an alternative to setting individual properties as shown above, the DataSource also accepts a connection URL like the one shown in Using the JDBC Driver and detailed in Connection URLs.
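As a brief sketch, the same kind of URL can be supplied through the database.url setting using its DataSource setter listed in Connection Settings (the values here are placeholders):

Configuring a DataSource with a connection URL
PGDataSource ds = new PGDataSource();
ds.setDatabaseUrl("jdbc:pgsql://localhost/test"); // accepts the same URL format as the Driver
ds.setUser("test");
ds.setPassword("test");

Connection connection = ds.getConnection();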

2. Connection URLs

Connection details can be specified to both the Driver and DataSource implementations alike via an extended URL format. The format allows specifying all required and optional details including any and all connection settings.

PGJDBC-NG’s extended connection URL format is:

jdbc:pgsql://<user>:<password>@<addresses>/<database>?property1=value1&propertyN=valueN
<user>:<password>

Username & password authentication credentials.

<addresses>

Zero or more addresses that the connection targets.

An address is comprised of a hostname or IP address and an optional TCP port designation.

If zero addresses are specified, the connection defaults to using the hostname localhost and PostgreSQL’s default port of 5432. To provide zero addresses, the // is omitted as well, as in jdbc:pgsql:<database>.

Multiple addresses can also be specified handled in accordance with Multiple/Fallback Addresses.

Finally, a special setting named unixsocket can be specified to target a unix domain socket address; see Unix Domain Sockets.

<database>

The name of the database the connection targets.

The database name is optional. When it is absent, the connection targets a database with the same name as the authenticated role of the connection. To omit the database name and connect to the user database, omit the slash preceding the name as well (e.g. jdbc:pgsql://localhost).
<property1>…​<propertyN>

List of connection settings specified as query properties.

The properties in the query portion of the URL can be used to specify connection settings. The available settings are detailed in Connection Settings.
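For illustration, a sketch of an extended URL combining credentials, two addresses, and query properties (the values are placeholders; the settings used are documented in Connection Settings):

Extended connection URL example
String url = "jdbc:pgsql://test:secret@host1,host2:5434/appdb"
    + "?application.name=MyApp&ssl.mode=require";
Connection connection = DriverManager.getConnection(url);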

2.1. Multiple/Fallback Addresses

As referenced in Connection URLs, multiple addresses can be specified in a connection URL separated by commas.

URL with 3 addresses host1, host2:5434 and 127.0.0.1
jdbc:pgsql://host1,host2:5434,127.0.0.1/db

Each address can be specified as an IPv4, IPv6, or DNS hostname followed by an optional TCP port. If any address doesn’t specify a port, PostgreSQL’s default port of 5432 is used.

2.1.1. Connection Strategy

When multiple addresses are present, a connection to each address is attempted in the order they were specified until a connection succeeds or every address has failed.

Fallback addresses are only used during an initial connection attempt. In the event of a successful connection and subsequent unexpected disconnection, no attempt is made to re-establish a connection regardless of the presence of fallback addresses.

2.2. Unix Domain Sockets

When Netty native libraries are present the driver can connect to a PostgreSQL instance via a unix domain socket. You specify this address using a special unixsocket property supplied in the URL connection properties.

jdbc:pgsql:<database>?unixsocket=/tmp

The property accepts a directory containing a PostgreSQL unix socket (as shown above), or a specific socket filename belonging to a PostgreSQL instance.

When specifying a directory for unixsocket the Driver searches the directory for a socket filename matching PostgreSQL’s known format (e.g. .s.PGSQL.5432). This is a convenience to allow easy connection in the most common case of a single running PostgreSQL instance.

If multiple socket filenames matching PostgreSQL’s format are found the Driver will log a warning and connect to the first matching socket filename it finds. Specifying an absolute path to a socket file will stop logging the warning.

Only a single unixsocket property can be specified, although it can be combined with Multiple/Fallback Addresses. When used in a fallback configuration the unix socket address is always attempted first.

2.2.1. Netty Native Libraries

Support for Unix Domain Sockets requires the presence of Netty’s native libraries. Visit Netty’s wiki for more information on how to acquire the correct native libraries for your platform.

3. Available Drivers

PGJDBC-NG provides a single implementation of the JDBC java.sql.Driver interface.

import com.impossibl.postgres.jdbc.PGDriver

4. Available DataSources

Multiple DataSource implementations are provided in accordance with the JDBC specification. Each implementation provides specific capabilities and is intended to be used for a specific purpose.

4.1. Standard DataSource

import com.impossibl.postgres.jdbc.PGDataSource

Basic DataSource implementation that provides similar functionality to that of connections vended by the Driver implementation.

This DataSource can be used as a convenience or when a DataSource is required but neither connection pooling nor XA capabilities are required.

4.2. Connection Pool DataSource

import com.impossibl.postgres.jdbc.PGConnectionPoolDataSource

The pooling DataSource provides connections that can be reused. Reuse avoids costly connection time and increases performance.

Although PGConnectionPoolDataSource is a convenient and compliant implementation of JDBC’s javax.sql.ConnectionPoolDataSource, using a library specifically built to support JDBC connection pooling, in conjunction with PGDataSource, will most likely yield better performance.
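For example, a minimal sketch of acquiring a connection through the standard javax.sql.ConnectionPoolDataSource API (assuming the same server/database/credential properties shown for PGDataSource are available here):

Acquiring a connection from the pooling DataSource
PGConnectionPoolDataSource ds = new PGConnectionPoolDataSource();
ds.setServerName("localhost");
ds.setDatabaseName("test");
ds.setUser("test");
ds.setPassword("test");

PooledConnection pooled = ds.getPooledConnection(); // physical connection intended to be managed by a pool
Connection connection = pooled.getConnection();     // logical handle handed to application code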

4.3. XA DataSource

import com.impossibl.postgres.jdbc.xa.PGXADataSource

DataSource with XA support for prepared transactions and two phase commit.

Full usage of the XA capabilities provided by this DataSource requires the presence of a compliant JTA transaction manager.
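For example, a minimal sketch of acquiring an XA-capable connection (assuming the same server/database/credential properties shown for PGDataSource are available here); enlisting the XAResource is normally handled by the JTA transaction manager:

Acquiring an XA connection
PGXADataSource ds = new PGXADataSource();
ds.setServerName("localhost");
ds.setDatabaseName("test");
ds.setUser("test");
ds.setPassword("test");

XAConnection xaConnection = ds.getXAConnection();
XAResource xaResource = xaConnection.getXAResource(); // handed to the JTA transaction manager
Connection connection = xaConnection.getConnection();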

5. Authentication Methods

5.1. PASSWORD

PGJDBC-NG supports the following password authentication methods offered by PostgreSQL.

  • PASSWORD

  • MD5

  • SCRAM-SHA-256 / SCRAM-SHA-256-PLUS

SCRAM-SHA-256-PLUS is supported and automatically preferred over SCRAM-SHA-256 when offered by the server.

Information about PostgreSQL authentication methods can be found here.

6. SSL Connections

6.1. Mode

PGJDBC-NG provides a number of SSL modes controlling how and when SSL is attempted and/or required during connection to a PostgreSQL instance.

Table 1. Available SSL connection modes
Mode SSL Guarantees Description

disable

Never

None

SSL is not attempted and connection fails if the server requires SSL.

allow

Maybe

Privacy

If the server requires SSL this mode allows it, although PGJDBC-NG makes no attempt to enable it otherwise.
Upon connection, SSL certificate verification is never performed.

prefer

Maybe

Privacy

PGJDBC-NG attempts connection using SSL but does not require it and connection will not fail if the server does not support SSL.
Upon connection, SSL certificate verification is never performed.

require

Yes

Privacy

PGJDBC-NG attempts connection using SSL and fails if the server does not support SSL.
Upon connection, SSL certificate verification is never performed.

verify-ca

Yes

Privacy & Trust

PGJDBC-NG attempts connection using SSL and fails if the server does not support SSL.
Upon connection the server’s SSL certificate is validated using a provided CA certificate.

verify-full

Yes

Privacy, Trust & Verification

PGJDBC-NG attempts connection using SSL and fails if the server does not support SSL.
Upon connection the server’s SSL certificate is validated using a provided CA certificate & the hostname is required to match one available in the server’s SSL certificate, ensuring it is the server we expected.

verify-full is the preferred mode as it provides the most stringent requirements and is the mode most associated with how SSL is designed to work.

Enabling one of the SSL modes above is as easy as providing it as the ssl.mode connection setting. In addition, there are a number of companion settings that may need to be provided depending on the chosen mode.
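For example, a sketch of a URL enabling verify-full along with a CA certificate location (the host, database, and file path are placeholders):

Enabling verify-full via the connection URL
String url = "jdbc:pgsql://db.example.com/appdb"
    + "?ssl.mode=verify-full"
    + "&ssl.ca.certificate.file=/etc/ssl/pg/root.crt";
Connection connection = DriverManager.getConnection(url, "app", "secret");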

Table 2. SSL Modes & Related Settings
Mode          Server Settings   Client Settings
disable       None              None
allow         None
prefer        None
require       None
verify-ca
verify-full

6.2. Client Certificate

As shown above, in all modes except disable, when attempting an SSL connection a client certificate and private key are searched for using the ssl.home-dir, ssl.key.file, ssl.key.password & ssl.certificate.file settings. If any of these items cannot be located, SSL will still be attempted without providing the server with a client certificate.

The server can be configured to require an SSL client certificate; in that case the connection attempt will fail if these items are not provided to the server.

7. Connection Settings

PGJDBC-NG provides a number of settings that control almost every aspect of the connection.
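As a sketch, the read-only setting from the tables below can be supplied in any of the three forms listed for each setting (Driver property, DataSource property, or system property); the values shown are placeholders:

Supplying a connection setting
// As a Driver property in the connection URL
Connection connection = DriverManager.getConnection(
    "jdbc:pgsql://localhost/test?read-only=true", "test", "test");

// As a DataSource property
PGDataSource ds = new PGDataSource();
ds.setDatabaseUrl("jdbc:pgsql://localhost/test");
ds.setReadOnly(true);

// As a JVM-wide system property (the System column in the tables below)
System.setProperty("pgjdbc.read-only", "true");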

7.1. JDBC Settings

Setting Property Default

Read Only

Connect in read-only mode

Driver

read-only

false

DataSource
getReadOnly()
setReadOnly(Boolean)
System

pgjdbc.read-only

Parsed SQL Cache Size

Size of the parsed SQL text cache.

A value of zero disables the cache.

Driver

parsed-sql.cache.size

250

DataSource
getParsedSqlCacheSize()
setParsedSqlCacheSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.parsed-sql.cache.size

Prepared Statement Cache Size

Size of the prepared statement cache

A value of zero disables the cache.

Driver

prepared-statement.cache.size

50

DataSource
getPreparedStatementCacheSize()
setPreparedStatementCacheSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.prepared-statement.cache.size

Prepared Statement Cache Threshold

# of times a query is seen before it is cached as a prepared statement.

A value of zero prepares all statements in advance.

Driver

prepared-statement.cache.threshold

0

DataSource
getPreparedStatementCacheThreshold()
setPreparedStatementCacheThreshold(Integer)

Must be greater than or equal to 0

System

pgjdbc.prepared-statement.cache.threshold

Description Cache Size

Size of the query description cache.

A value of zero disables the cache.

Driver

description.cache.size

250

DataSource
getDescriptionCacheSize()
setDescriptionCacheSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.description.cache.size

Network Timeout

Default timeout for network communication.

Value can be changed at runtime through API.

Value of zero disables the timeout.

Driver

network.timeout

0

DataSource
getNetworkTimeout()
setNetworkTimeout(Integer)

Must be greater than or equal to 0

System

pgjdbc.network.timeout

Strict Mode

Enable or disable strict adherence to JDBC specification.

Affected behavior when disabled:

  • ResultSetMetaData.getColumnName(int) returns a value equivalent to ResultSetMetaData.getColumnLabel(int), if a label is available.

  • The Statement.executeBatch(…​) family of methods insert an extraneous EXECUTE_FAILED status into BatchUpdateException.getUpdateCounts() even though PostgreSQL stops executing at the first error.

Driver

strict-mode

true

DataSource
getStrictMode()
setStrictMode(Boolean)
System

pgjdbc.strict-mode

Fetch Size

Default fetch size of query results.

Value can be changed at runtime.

A value of zero disables batching results.

Driver

fetch.size

None

DataSource
getFetchSize()
setFetchSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.fetch.size

Housekeeper

Enables or disables the housekeeping system for leaked JDBC objects.

Driver

housekeeper

true

DataSource
getHousekeeper()
setHousekeeper(Boolean)
System

pgjdbc.housekeeper

Registry Sharing

Enables or disables sharing type registries between connections.

Driver

registry.sharing

true

DataSource
getRegistrySharing()
setRegistrySharing(Boolean)
System

pgjdbc.registry.sharing

API Trace

Enables or disables API trace output.

Driver

api.trace

false

DataSource
getApiTrace()
setApiTrace(Boolean)
System

pgjdbc.api.trace

API Trace File

File destination of API trace output.

api.trace must be true to generate trace output
Driver

api.trace.file

None

DataSource
getApiTraceFile()
setApiTraceFile(String)
System

pgjdbc.api.trace.file

7.2. JDBC DataSource Settings

Setting Property Default

Data Source Name

Name of data source.

DataSource
getDataSourceName()
setDataSourceName(String)

None

System

pgjdbc.data-source.name

Server Name

Host name for TCP connections.

DataSource
getServerName()
setServerName(String)

localhost

System

pgjdbc.server.name

Port Number

Port number for TCP connections.

DataSource
getPortNumber()
setPortNumber(Integer)

5432

System

pgjdbc.port.number

Must be between 1 and 65535

Server Local Name

Unix socket name for local server connections.

DataSource
getServerLocalName()
setServerLocalName(String)

None

System

pgjdbc.server.local-name

Server Addresses

Comma separated list of server addresses for which a connection will be attempted in order.

Supports providing DNS, IPv4, IPv6 & Unix socket addresses. DNS & IP addresses are specified in host[:port] format, while Unix socket addresses must contain a / to be valid. Additionally, IPv6 addresses must be enclosed in [].

Specifying a list of addresses takes precedence over server.name, port.number & server.local-name and will cause those settings to be ignored.
DataSource
getServerAddresses()
setServerAddresses(String)

None

System

pgjdbc.server.addresses

Login Timeout

Maximum time to wait for a connection to be established.

DataSource
getLoginTimeout()
setLoginTimeout(Integer)

0

System

pgjdbc.login.timeout

Must be greater than or equal to 0

7.3. System Settings

Setting Property Default

Database URL

URL of database connection

Driver

database.url

None

DataSource
getDatabaseUrl()
setDatabaseUrl(String)
System

pgjdbc.database.url

Database Name

Name of database related to connection

Driver

database.name

Empty

DataSource
getDatabaseName()
setDatabaseName(String)
System

pgjdbc.database.name

Application Name

Name of the client application

Driver

application.name

Driver implementation name

DataSource
getApplicationName()
setApplicationName(String)
System

pgjdbc.application.name

User

Username for server authentication & authorization.

If no value is provided, it defaults to the Java system property user.name.

Driver

user

Current user via user.name system property

DataSource
getUser()
setUser(String)
System

pgjdbc.user

Password

Password for server authentication.

Driver

password

Empty

DataSource
getPassword()
setPassword(String)
System

pgjdbc.password

Field Format

Preferred format of result fields.

  • text

  • binary

Driver

field.format

binary

DataSource
getFieldFormat()
setFieldFormat(FieldFormat)
System

pgjdbc.field.format

Field Length Max

Default maximum allowed length of field.

Driver

field.length.max

None

DataSource
getFieldLengthMax()
setFieldLengthMax(Integer)

Must be greater than or equal to 0

System

pgjdbc.field.length.max

Param Format

Preferred format of prepared statement parameters.

  • text

  • binary

Driver

param.format

binary

DataSource
getParamFormat()
setParamFormat(FieldFormat)
System

pgjdbc.param.format

Money Fractional Digits

# of fractional digits for money fields.

Driver

money.fractional-digits

2

DataSource
getMoneyFractionalDigits()
setMoneyFractionalDigits(Integer)

Must be between 0 and 20

System

pgjdbc.money.fractional-digits

SSL Mode

SSL connection mode.

  • disable

  • allow

  • prefer

  • require

  • verify-ca

  • verify-full

Driver

ssl.mode

disable

DataSource
getSslMode()
setSslMode(SSLMode)
System

pgjdbc.ssl.mode

SSL Certificate File

SSL client certificate file name.

Driver

ssl.certificate.file

postgresql.crt

DataSource
getSslCertificateFile()
setSslCertificateFile(String)
System

pgjdbc.ssl.certificate.file

SSL CA Certificate File

SSL certificate authority file name.

Driver

ssl.ca.certificate.file

root.crt

DataSource
getSslCaCertificateFile()
setSslCaCertificateFile(String)
System

pgjdbc.ssl.ca.certificate.file

SSL Key File

SSL key file name.

Driver

ssl.key.file

postgresql.pk8

DataSource
getSslKeyFile()
setSslKeyFile(String)
System

pgjdbc.ssl.key.file

SSL Key Password

SSL key file password.

Driver

ssl.key.password

None

DataSource
getSslKeyPassword()
setSslKeyPassword(String)
System

pgjdbc.ssl.key.password

SSL Key Password Callback

SSL key file password callback class name.

Driver

ssl.key.password.callback

com.impossibl.postgres.protocol.ssl.ConsolePasswordCallbackHandler

DataSource
getSslKeyPasswordCallback()
setSslKeyPasswordCallback(Class)
System

pgjdbc.ssl.key.password.callback

SSL Home Dir

Directory that SSL files are located in.

If the value begins with a path separator (e.g. /) it will be considered an absolute path. In all other cases it is considered a value relative to the user’s home directory. On Windows $APPDATA is used as the home directory, all others use the user.home system property.

Driver

ssl.home-dir

.postgresql

DataSource
getSslHomeDir()
setSslHomeDir(String)
System

pgjdbc.ssl.home-dir

SQL Trace

Enables or disables SQL trace output

Driver

sql.trace

false

DataSource
getSqlTrace()
setSqlTrace(Boolean)
System

pgjdbc.sql.trace

SQL Trace File

File destination of SQL trace output.

sql.trace must be true to generate trace output
Driver

sql.trace.file

None

DataSource
getSqlTraceFile()
setSqlTraceFile(String)
System

pgjdbc.sql.trace.file

7.4. Protocol Settings

Setting Property Default

Protocol Version

Version of server protocol to use.

Valid protocol versions:

  • 3.0

Driver

protocol.version

3.0

DataSource
getProtocolVersion()
setProtocolVersion(Version)
System

pgjdbc.protocol.version

Protocol I/O Mode

I/O subsystem selection mode.

any

Attempts each subsystem in the order native, nio, oio.

native

Native subsystem using kqueue on macOS and epoll on Linux. Native libraries must be provided, see Netty’s wiki.

nio

Java NIO subsystem.

oio

Java Blocking I/O subsystem.

Driver

protocol.io.mode

any

DataSource
getProtocolIoMode()
setProtocolIoMode(ProtocolIOMode)
System

pgjdbc.protocol.io.mode

Protocol I/O Threads

Number of I/O threads in pool

Driver

protocol.io.threads

3

DataSource
getProtocolIoThreads()
setProtocolIoThreads(Integer)

Must be greater than or equal to 1

System

pgjdbc.protocol.io.threads

Protocol Encoding

Text encoding

Driver

protocol.encoding

UTF-8

DataSource
getProtocolEncoding()
setProtocolEncoding(Charset)
System

pgjdbc.protocol.encoding

Protocol Socket Receive Buffer Size

Socket receive buffer size

Driver

protocol.socket.recv-buffer.size

None

DataSource
getProtocolSocketRecvBufferSize()
setProtocolSocketRecvBufferSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.protocol.socket.recv-buffer.size

Protocol Socket Send Buffer Size

Socket send buffer size

Driver

protocol.socket.send-buffer.size

None

DataSource
getProtocolSocketSendBufferSize()
setProtocolSocketSendBufferSize(Integer)

Must be greater than or equal to 0

System

pgjdbc.protocol.socket.send-buffer.size

Protocol Buffer Pooling

Enable or disable pooling of byte buffers

Driver

protocol.buffer.pooling

true

DataSource
getProtocolBufferPooling()
setProtocolBufferPooling(Boolean)
System

pgjdbc.protocol.buffer.pooling

Protocol Message Size Max

Maximum size message that can be received

Driver

protocol.message.size.max

20971520

DataSource
getProtocolMessageSizeMax()
setProtocolMessageSizeMax(Integer)

Must be greater than or equal to 0

System

pgjdbc.protocol.message.size.max

Protocol Trace

Enable or disable message trace output

Driver

protocol.trace

false

DataSource
getProtocolTrace()
setProtocolTrace(Boolean)
System

pgjdbc.protocol.trace

Protocol Trace File

File destination of message trace output

protocol.trace must be true to generate trace output
Driver

protocol.trace.file

None

DataSource
getProtocolTraceFile()
setProtocolTraceFile(String)
System

pgjdbc.protocol.trace.file

8. JDBC 4.2

There are numerous JDBC books, tutorials, blogs, etc. available to learn the basics and even advanced aspects of JDBC. As such, this section will provide details only specific to easy and efficient usage of PGJDBC-NG’s implementation of the JDBC specification.

8.1. Using Specific PostgreSQL Types

JDBC maps many of the standard SQL types to JDBC types and further to Java classes. Prior to JDBC 4.2 this was done using an integer type code (available in java.sql.Types). This mapping proves lacking when it comes to using many of the extended types that PostgreSQL provides.

JDBC 4.2 added a new method for specifying extended types, java.sql.SQLType. Most of the interfaces in JDBC were extended to accept this new, extendable, type in place of the previous integer type code. PGJDBC-NG has embraced this new method of specifying PostgreSQL’s extended types via PGType.

8.1.1. PGType

PGType is a PGJDBC-NG provided implementation of java.sql.SQLType and allows referencing the most common of the specific PostgreSQL extended types using a simple enum.

PGType can be used wherever a java.sql.SQLType is accepted and whenever a specific PostgreSQL type is wanted or required.

Setting a parameter value, specifying PostgreSQL’s money type
PreparedStatement pstmt = connection.prepareStatement("INSERT INTO ledger(balance) VALUES (?)");
pstmt.setObject(1, 113.54, PGType.MONEY); (1)
pstmt.executeUpdate();
1 Set a float with the specific column type of money.

8.1.2. Resolving Custom Types

As stated, PGType only handles the most common of PostgreSQL’s types. Any extended types not supported by PGType must be handled by resolving a type name to a PGAnyType.

PGConnection provides the resolveType method which does the resolution to a PGAnyType and thus provides usable SQLTypes.

Resolving a type to a PGAnyType
PGConnection pgConnection = connection.unwrap(PGConnection.class); (1)
PGAnyType myType = pgConnection.resolveType("my_type"); (2)
1 Acquire a PGConnection from a standard connection (see Accessing via unwrap).
2 Resolve the custom type my_type to a PGAnyType.
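The resolved type can then be supplied wherever a java.sql.SQLType is accepted; for example, a sketch binding a parameter (the table, column and value are hypothetical):

Using a resolved type
PreparedStatement pstmt = connection.prepareStatement("INSERT INTO things(value) VALUES (?)");
pstmt.setObject(1, myValue, myType); // myType is the resolved PGAnyType; myValue must suit my_type
pstmt.executeUpdate();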

8.2. User Defined Types

PostgreSQL supports User Defined Types, which it refers to as composite types, created with the CREATE TYPE <name> AS (<attributes>) DDL. These are supported in JDBC using one of two mechanisms, the standard java.sql.Struct API or via custom mapped Java classes that implement java.sql.SQLData.

8.2.1. java.sql.Struct

PGJDBC-NG fully supports UDTs via the JDBC java.sql.Struct API for specifying UDT parameters and for retrieving UDT values from ResultSets.

Strong typing on the elements returned from Struct.getAttributes() will only be present if the query was run from a PreparedStatement. A regular Statement uses the text-based PostgreSQL protocol, which will always return the values of the UDT represented as strings.
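For example, a sketch building and binding a Struct for a composite type such as the address type defined later in this chapter (the addresses table is hypothetical):

Binding a Struct parameter
Struct addr = connection.createStruct("address",
    new Object[] {"123 Easy Street", "Easy Town", "IL", "12345"}); // attributes in definition order

PreparedStatement pstmt = connection.prepareStatement("INSERT INTO addresses(addr) VALUES (?)");
pstmt.setObject(1, addr);
pstmt.executeUpdate();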

8.2.2. java.sql.SQLData

As an alternative to using the standard Struct interface to access UDTs, custom Java classes can be created that implement the java.sql.SQLData interface. PGJDBC-NG also fully supports the java.sql.SQLData API.

Creating custom types

The JDBC specification suggests that all classes implementing SQLData be generated by a mechanism of the implementation’s choosing. PGJDBC-NG supports this method using the provided UDT Generator, although UDTs can also be created manually.

UDT Generator

Using the UDT generator requires a running PostgreSQL instance and a schema in place that provides the types that Java classes are to be generated for. Details about executing the UDT generator, either via the command line interface or using it as a Java library, are available in UDT Generator.

Manual

Manual creation of custom Java classes implementing java.sql.SQLData is fairly easy but requires manual updates when the schema changes.

Given a UDT created with the following DDL:

CREATE TYPE address AS (street text, city text, state char(2), zip text);

A simple implementation of java.sql.SQLData is:

import java.sql.SQLData;

class Address implements SQLData {

  public String street;
  public String city;
  public String state;
  public String zip;

  public String getSQLTypeName() { (1)
    return "address";
  }

  public void readSQL(SQLInput stream, String typeName) throws java.sql.SQLException { (2)
    street = stream.readString();
    city = stream.readString();
    state = stream.readString();
    zip = stream.readString();
  }

  public void writeSQL (SQLOutput stream) throws SQLException { (3)
    stream.writeString(street);
    stream.writeString(city);
    stream.writeString(state);
    stream.writeString(zip);
  }

}
1 Provides the custom type name this Java class maps to; the name can be schema qualified if necessary.
2 readSQL reads the attributes in definition order.
3 writeSQL writes the attributes in definition order.
The implementations of readSQL and writeSQL must always read and write attributes in the order in which they were defined in the DDL statement. After DDL updates like ALTER TYPE this order isn’t always obvious, which is where using the UDT Generator can help ensure a correct implementation.

8.3. Arrays

While there are no special requirements beyond the standard JDBC API and java.sql.Array for accessing the array types provided by PostgreSQL, it is worth noting that PGJDBC-NG fully supports accessing arrays of standard, extended and custom types.
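For example, a sketch round-tripping a text[] column using the standard API (the items table is hypothetical):

Writing and reading an array
Array tags = connection.createArrayOf("text", new String[] {"red", "green", "blue"});

PreparedStatement pstmt = connection.prepareStatement("INSERT INTO items(tags) VALUES (?)");
pstmt.setArray(1, tags);
pstmt.executeUpdate();

ResultSet rs = connection.createStatement().executeQuery("SELECT tags FROM items");
while (rs.next()) {
  String[] values = (String[]) rs.getArray(1).getArray(); // element class follows the array's mapped type
}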

8.4. Dates/Times

Prior to JDBC 4.2, java.sql.Time, java.sql.Date & java.sql.Timestamp were used to represent the associated SQL datatypes in Java. These types have proven unwieldy when used in situations spanning timezones and calendars.

JDBC 4.2 introduced support for the new Java Date and Time API (JSR 310), which is easy to use and provides consistent views of times and dates.

PGJDBC-NG not only fully implements this portion of the JDBC 4.2 specification; all of its Date/Time handling is built on this API. It is therefore suggested to skip the JDBC Date/Time types when possible and use the Date/Time API instead. It provides an easier to use API and the data undergoes less transformation to arrive at a Java type.

Some simple examples are provided below to detail the basic usage of JDBC with the new Date and Time API classes. To learn the full capabilities and usage of the new integration, we suggest reading the relevant portions of the JDBC API documentation or any of the widely available tutorials.

8.4.1. Basic Usage

To retrieve values using the new Date and Time API, do not use the convenience APIs named for their counterpart datatypes; instead use getObject, providing the type of object you would like to receive.

Retrieve a LocalDateTime from a TIMESTAMP WITHOUT TIMEZONE
ResultSet rs = statement.getResultSet();
LocalDateTime incorrect = rs.getTimestamp(1).toLocalDateTime(); (1)
LocalDateTime correct = rs.getObject(1, LocalDateTime.class); (2)
1 Incorrect method using convenience API and conversion
2 Correct method using getObject and no conversion.

Similarly setting values should provide the new Date and Time API object directly to the driver using setObject and not use the convenience APIs.

Providing LocalDateTime for a TIMESTAMP WITHOUT TIMEZONE
PreparedStatement stmt = conn.prepareStatement("INSERT INTO log VALUES (?, ?)");
stmt.setString(2, "Something happened");
stmt.setTimestamp(1, Timestamp.valueOf(LocalDateTime.now())); (1)
stmt.setObject(1, LocalDateTime.now()); (2)
1 Incorrect method using convenience API and conversion
2 Correct method using setObject and no conversion.

The preceding examples show usage of LocalDateTime but support for all of the new Date and Time API classes is provided matching the applicable SQL datatypes as specified by JDBC.

JDBC 4.2 does not specify support for ZonedDateTime in any fashion and PGJDBC-NG follows that specification. ZonedDateTime must be converted to OffsetDateTime before being passed to the API.
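For example, a minimal sketch (pstmt and rs stand for a statement and result set bound to a TIMESTAMP WITH TIME ZONE column):

Converting ZonedDateTime before use
ZonedDateTime zoned = ZonedDateTime.now();
pstmt.setObject(1, zoned.toOffsetDateTime()); // keep the instant and offset, drop the zone rules

OffsetDateTime read = rs.getObject(1, OffsetDateTime.class); // read back as OffsetDateTime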

9. Beyond JDBC

Some of the functionality provided by PostgreSQL cannot be utilized via JDBC. In some cases PGJDBC-NG has extended the JDBC API to provide access to this functionality.

9.1. PGConnection

PGConnection is an interface that extends java.sql.Connection and provides access to extended features and functionality.

In almost all cases to access extended features and functionality you will need a reference to a PGConnection instead of the standard java.sql.Connection.

9.1.1. Accessing via unwrap

Although many times the Driver or DataSource will hand you an actual implementation of PGConnection and simply casting to PGConnection would work, there are numerous cases where casting will not succeed.

Thankfully, JDBC provides a mechanism to access native connection interfaces, like PGConnection, through the JDBC API. That mechanism is the unwrap method available on the connection.

Unwrapping instead of casting
Connection connection = DriverManager.getConnection(url);
PGConnection pgConnection = (PGConnection)connection; (1)
PGConnection pgConnection = connection.unwrap(PGConnection.class); (2)
1 Incorrect method assuming PGConnection and casting
2 Correct method using unwrap.

Not only does using unwrap guarantee success (when using PGJDBC-NG), it’s just as easy as casting.

9.2. Asynchronous Notifications

PostgreSQL provides an asynchronous notification system via its LISTEN & NOTIFY commands.

PGJDBC-NG provides the ability to be asynchronously notified when notifications are raised, including the ability to filter on channel names and payload values.

Simple asynchronous notification example
PGConnection connection = DriverManager.getConnection(url).unwrap(PGConnection.class); (1)
connection.addListener(new PGNotificationListener() { (2)

  public void notification(int processId, String channelName, String payload) {
    System.out.println("Received Notification: " + processId + ", " + channelName + ", " + payload); (3)
  }

  public void closed() { (4)
    // initiate reconnection & restart listening
  }

});


Statement stmt = connection.createStatement();
stmt.executeUpdate("LISTEN msgs"); (5)

stmt.executeUpdate("NOTIFY msgs"); (6)
1 Acquire PGConnection to access extended functionality
2 Add a notification listener to the connection.
3 Handle incoming notification.
Notifications are delivered on I/O threads. Executing long or blocking operations will adversely affect performance; they should be avoided in all cases.
4 Listeners can be notified when the connection is closed unexpectedly. This allows them to reconnect and restart listening, ensuring notifications will always be received.
5 Issue the LISTEN command to the server, this starts the server delivering notifications to this connection.
6 An example NOTIFY command is issued; it will be received by the listener added in <2>
As shown above, adding a listener only prepares PGJDBC-NG to deliver notifications inside the JVM; until a LISTEN command is issued to the server, no notifications will be received.

9.2.1. Filtering

Notification listeners can also be added with a filter on the channel name. This allows individual listeners to handle specific notification channels.

9.3. COPY FROM/TO

PostgreSQL provides the capability to bulk load data from an external file, using COPY <table/spec> FROM STDIN and to bulk save data to an external file, using COPY <table/spec> TO STDOUT.

PGJDBC-NG supports these commands using two different methods: Standard Input & Output or copyFrom / copyTo.

9.3.1. Standard Input & Output

Issuing a relevant command via a Statement or PreparedStatement will read from Java’s System.in or write to System.out, respectively.

This method can be useful when redirecting the standard streams from the console, or when redirecting them programmatically via Java’s System.setIn or System.setOut.

Read from redirected System.in using a Statement
System.setIn(new ByteArrayInputStream("1-1\t1-2\n2-1\t2-2\n3-1\t3-2".getBytes(UTF_8))); (1)
try (Statement statement = connection.createStatement()) {
  statement.execute("COPY a_table FROM STDIN"); (2)
}
1 Redirect System.in to a data stream in COPY format
2 Issue the COPY using a standard Statement
Write to redirected System.out using a Statement
System.setOut(new PrintStream(new FileOutputStream("table.txt"))); (1)
try (Statement statement = connection.createStatement()) {
  statement.execute("COPY a_table TO STDOUT"); (2)
}
1 Redirect System.out to a file stream that will receive table data in COPY format
2 Issue the COPY using a standard Statement
Redirecting System.in or System.out can have unwanted global ramifications. See copyFrom / copyTo for a method that does not require global redirection.

9.3.2. copyFrom / copyTo

PGJDBC-NG’s PGConnection provides methods to issue COPY commands targeting provided input or output streams. Using these methods allows specifying a stream without globally redirecting the standard streams as required when using Standard Input & Output.

Although they require access to PGConnection, copyFrom & copyTo are easy to use and thread-safe.

Read from a specified stream using copyFrom
InputStream tableData = new ByteArrayInputStream("1-1\t1-2\n2-1\t2-2\n3-1\t3-2".getBytes(UTF_8)); (1)
connection.unwrap(PGConnection.class).copyFrom("COPY a_table FROM STDIN", tableData); (2)
1 Declare a data stream in COPY format
2 Issue the COPY using PGConnection.copyFrom
Write to a specified stream using copyTo
OutputStream fileOut = new FileOutputStream("table.txt"); (1)
connection.unwrap(PGConnection.class).copyTo("COPY a_table TO STDOUT", fileOut); (2)
1 Declare a file stream to receive the table data in COPY format
2 Issue the COPY using PGConnection.copyTo
When using copyFrom the SQL command must only be a valid COPY … FROM STDIN command and when using copyTo the SQL command must only be a valid COPY … TO STDOUT command. Any other commands issued will result in an exception being thrown.

10. Logging & Tracing

10.1. Logging

PGJDBC-NG uses the Java platform’s java.util.logging API for all of its logging.

10.1.1. Configuration

The logging configuration is done using the standard java.util.logging configuration method using a logging.properties file.

The root logging package is com.impossibl.postgres, configuration of this package will control all log messages generated by PGJDBC-NG.

PGJDBC-NG uses logging specifically for configuration, warnings, errors and intermittent informational messages. For more detailed inspection of the driver’s activities and state, Tracing is provided.
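For example, the driver’s logging level can also be raised programmatically through java.util.logging (a sketch; the chosen level is arbitrary):

Raising the driver's log level
Logger pgjdbcLogger = Logger.getLogger("com.impossibl.postgres");
pgjdbcLogger.setLevel(Level.FINE); // applies to all loggers under the driver's root package

Note that any configured handlers must also permit the chosen level for the messages to appear.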

10.2. Tracing

PGJDBC-NG provides tracing to diagnose errors and performance problems that may arise while using connections provided by the Driver or one of the DataSources. Tracing is managed separately from logging to allow generation of succinct trace logs that are easy to read, unlike logging which can be extremely verbose.

There are three distinct levels of tracing provided by PGJDBC-NG to allow inspection of specific activities of the Driver, DataSources and Connections.

10.2.1. JDBC API Level

The JDBC API tracing provides a trace log of all user generated JDBC API calls; it specifically excludes any calls the driver itself may make to public JDBC API. This allows investigation of what, specifically, the driver is being asked to execute.

Each trace log entry includes the name of the API method called, all parameters (along with documentation level names and their provided values), as well as the return value of the method.

API tracing is enabled with the api.trace & api.trace.file settings. See Connection Settings.

API tracing is built using the Spy JDBC Interceptor library provided by the PGJDBC-NG project as well.

10.2.2. SQL Level

SQL tracing provides a trace log of all SQL passed to the server. It is useful in determining what exactly is sent to the server and in what order.

To honor the JDBC specification, PGJDBC-NG sometimes manipulates user-provided SQL by prepending/appending text, replacing "escapes" and more. In addition to this manipulation, SQL is sometimes generated and executed on behalf of the user (e.g. COMMIT & ROLLBACK). All of this manipulation and generation is done before the SQL trace log is written; it therefore provides a log of exactly what SQL is sent to the server to be executed.

Each trace log entry includes the SQL to be executed by the server as well as the operation being requested of the server, including "preparing" SQL for execution, executing previously prepared SQL and directly executing unprepared SQL.

SQL tracing is enabled with the sql.trace & sql.trace.file settings. See Connection Settings.

10.2.3. Protocol Level

Protocol level tracing provides a WireShark inspired trace log of all PostgreSQL Frontend/Backend protocol messages passed between PGJDBC-NG and the connected server.

This trace log is very compact but provides message IDs, message direction and dispatch information for all communication with the server.

Protocol tracing is a valuable resource for low-level driver debugging but is not of much use to those unfamiliar with the underlying PostgreSQL protocol; almost everybody should stick to JDBC & SQL level tracing.

Protocol tracing is enabled with the protocol.trace & protocol.trace.file settings. See Connection Settings.
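For example, a sketch of a URL enabling all three trace levels using the settings above (the file paths are placeholders):

Enabling tracing via the connection URL
String url = "jdbc:pgsql://localhost/test"
    + "?api.trace=true&api.trace.file=/tmp/api.trace"
    + "&sql.trace=true&sql.trace.file=/tmp/sql.trace"
    + "&protocol.trace=true&protocol.trace.file=/tmp/protocol.trace";
Connection connection = DriverManager.getConnection(url, "test", "test");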

11. UDT Generator

Supporting SQL User Defined Types in JDBC 4.2 requires creating a POJO that implements the SQLData interface. Although this is fairly straightforward it can be time consuming to keep Java objects up-to-date with a changing SQL schema. This generator provides a simple way to generate UDT classes from an available schema.

The generator will connect to a provided database, lookup information about a provided set of UDTs and generate the matching classes.

11.1. Generation

The generator can generate POJOs for composite types and enum classes for enumeration types as well as mapping to standard Java types for all other supported types.

11.1.1. Referencing/Nested Types

The generator supports generating classes for SQL types that reference/contain other SQL types.

If the referenced type(s) have POJOs or enums being generated during the same execution, the generator will use the generated type value instead of the standard mapping.

If not, composite types will be created as java.sql.Structs and enums will be created as Strings.

11.2. Executing

The generator can be executed multiple ways.

11.2.1. CLI

To execute the generator on the command line you can use the uber jar as an executable JAR or reference the main class by the standard method.

java -jar udt-gen-all-0.8.9.jar [options] <type names>
Options

To generate classes from a schema the generator needs access to a running server with the schema available and a list of type names that it should generate classes for.

Generation

Class generation requires a target Java package and list of types to be specified.

Table 3. Class Generation Options
Short Option Long Option Description

-p

--pkg

Target package for generated classes (Required)

-o

--out

Output directory for generated files (defaults to ./out)

After all options on the command line a list of type names to generate must be provided. Each type can be schema qualified and/or quoted as necessary for PostgreSQL’s interpretation. For example:

<exec generator> -o ./generated/udts a_public_type public.b_public_type "MY_SCHEMA"."A_TYPE"
Connection

The generator provides options for specifying connection properties. The database name is required; all other options can be left at their defaults.

Table 4. Connection Options
Short Option Long Option Description

--url

Database connection URL (overrides name, host, port)

-D

--database

Database name

-H

--host

Database server hostname (default: localhost)

-T

--port

Database server port (default: 5432)

-U

--user

Database username (can be used with --url)

-P

--password

Database password (can be used with --url)

11.2.2. Library

The generator can also be used as a library and executed programmatically.

import com.impossibl.postgres.tools.UDTGenerator;
import java.sql.Connection;
import java.util.List;
import java.io.File;

class ExecGen {
  public static void executeGenerator(Connection connection, List<String> typeNames) {
    new UDTGenerator(connection, "sql.schema.types", typeNames)
        .generate(new File("out"));
  }
}

11.2.3. Example

Simple VCard data types in SQL
CREATE TYPE title as enum ('mr', 'mrs', 'ms', 'dr');
CREATE TYPE address as (street text, city text, state char(2), zip char(5));
CREATE TYPE v_card as (id int4, name text, title title, addresses address[]);

Generating classes for the custom SQL types above yields the following…​

Matching VCard Java classes
public class Address implements SQLData {
  private static final String TYPE_NAME = "public.address";

  private String street;

  private String city;

  private String state;

  private String zip;

  @Override
  public String getSQLTypeName() throws SQLException {
    return TYPE_NAME;
  }

  public String getStreet() {
    return street;
  }

  public void setStreet(String street) {
    this.street = street;
  }

  public String getCity() {
    return city;
  }

  public void setCity(String city) {
    this.city = city;
  }

  public String getState() {
    return state;
  }

  public void setState(String state) {
    this.state = state;
  }

  public String getZip() {
    return zip;
  }

  public void setZip(String zip) {
    this.zip = zip;
  }

  @Override
  public void readSQL(SQLInput in, String typeName) throws SQLException {
    this.street = in.readString();
    this.city = in.readString();
    this.state = in.readString();
    this.zip = in.readString();
  }

  @Override
  public void writeSQL(SQLOutput out) throws SQLException {
    out.writeString(this.street);
    out.writeString(this.city);
    out.writeString(this.state);
    out.writeString(this.zip);
  }
}

public enum Title {
  MR("mr"),

  MRS("mrs"),

  MS("ms"),

  DR("dr");

  private String label;

  Title(String label) {
    this.label = label;
  }

  public String getLabel() {
    return label;
  }

  public static Title valueOfLabel(String label) {
    for (Title value : values()) {
      if (value.label.equals(label)) return value;
    }
    throw new IllegalArgumentException("Invalid label");
  }
}


public class VCard implements SQLData {
  private static final String TYPE_NAME = "public.v_card";

  private Integer id;

  private String name;

  private Title title;

  private PGStruct[] addresses;

  @Override
  public String getSQLTypeName() throws SQLException {
    return TYPE_NAME;
  }

  public Integer getId() {
    return id;
  }

  public void setId(Integer id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public Title getTitle() {
    return title;
  }

  public void setTitle(Title title) {
    this.title = title;
  }

  public PGStruct[] getAddresses() {
    return addresses;
  }

  public void setAddresses(PGStruct[] addresses) {
    this.addresses = addresses;
  }

  @Override
  public void readSQL(SQLInput in, String typeName) throws SQLException {
    this.id = in.readInt();
    this.name = in.readString();
    this.title = Title.valueOfLabel(in.readString());
    this.addresses = in.readObject(PGStruct[].class);
  }

  @Override
  public void writeSQL(SQLOutput out) throws SQLException {
    out.writeInt(this.id);
    out.writeString(this.name);
    out.writeString(this.title.getLabel());
    out.writeObject(this.addresses, null);
  }
}

Objects of these Java classes can be easily inserted into and retrieved from the database using the JDBC API.

Address address = new Address();        (1)
address.setStreet("123 Easy Street");
address.setCity("Easy Town");
address.setState("IL");
address.setZip("12345");

VCard vCard = new VCard();              (2)
vCard.setId(5);
vCard.setTitle(Title.MR);
vCard.setName("A Guy");
vCard.setAddresses(new Address[]{address});

PreparedStatement insertStmt = connection.prepareStatement("INSERT INTO contacts(card) VALUES (?)");
insertStmt.setObject(1, vCard); (3)
insertStmt.executeUpdate();

Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery("SELECT card FROM contacts");
while (rs.next()) {
  VCard card = rs.getObject(1, VCard.class); (4)

  HashMap<String, Class<?>> customTypes = new HashMap<>(); (5)
  customTypes.put("v_card", VCard.class);
  customTypes.put("address", Address.class);

  VCard mapped = (VCard) rs.getObject(1, customTypes); (6)
}
1 Create an Address object.
2 Create a VCard object.
3 Use the setObject method to provide a VCard directly to the API.
4 Use the getObject method with a specific type request to retrieve a VCard object.
5 Generate a map of SQL type names to Java class implementations, to use with <6>
6 Use the getObject method with a list of type mappings to retrieve any available custom object mapped to its Java counterpart.

12. Spy JDBC Interceptor

Spy uses auto-generated relay & listener classes for each JDBC API interface to provide an easy method to intercept calls to the JDBC API. Currently it provides the JDBC API level tracing functionality for PGJDBC-NG (see JDBC API Level).

Although the library is used as a tracing system for PGJDBC-NG it should be able to be used with any compliant JDBC driver.

12.1. Connection Wrapping

To use the library as an API tracing system you have three options to wrap connections and enable tracing:

12.1.1. Manual

Manually wrapping a connection returned from a JDBC Driver or DataSource and using the wrapped connection as normal provides an easy way to programmatically decide when to enable or disable tracing.

Manually wrapping an opened connection
Connection conn = new ConnectionTracer(DriverManager.getConnection(...), new FileTracer("api.trace"));

12.1.2. Driver

If you are acquiring connections using the JDBC Driver API (as opposed to the DataSource API) a more declarative method can be used by altering the URL used to connect to your server.

Simply prefixing the driver scheme of the JDBC URL with spy: and leaving the rest of the URL in its original form will cause the library to automatically wrap all connections.

Use special JDBC URL to enable tracing
Connection conn = DriverManager.getConnection("jdbc:spy:pgjdbc://localhost/db");

12.1.3. Automatic

PGJDBC-NG has built in support for tracing using the library, see JDBC API Level.

12.2. Beyond Tracing

Spy generates a Relay class and matching Listener interface for each interface in the JDBC API. To support detailed tracing an implementation of each Listener interface is generated that produces trace output.

Due to the way it is structured, the library can be used to do a lot more than tracing by implementing custom Listener interfaces and wrapping connections using one of the methods previously outlined.

13. Release Notes

Detailed release notes for each version are available here.

Appendix A: Application Servers

14. WildFly

WildFly is a Java EE application server that PGJDBC-NG is compatible with for providing access to PostgreSQL instances.

If you wish to deploy from the WildFly CLI see CLI Deployment.
If you wish to deploy from the WildFly Admin Console see Console Deployment.
Alternatively, you can examine the sample configuration files in File Deployment.

These instructions were created using WildFly 15.0.1.Final as a reference. Earlier or later versions of WildFly may use slightly different commands, terminology and/or layouts, although the steps for deployment remain similar.

14.1. CLI Deployment

First, use the following command to add the Driver as a module.

Add the Driver Jar as a module
module add --name=com.impossibl.pgjdbc-ng --resources=pgjdbc-ng-all-0.8.9.jar --dependencies=javax.api,javax.transaction.api

Next, define a named JDBC Driver.

We provide the class names of the Standard and XA DataSource implementations so that they don’t need to be provided each time we create a DataSource.
Add a named JDBC Driver
/subsystem=datasources/jdbc-driver=pgjdbc-ng:add(driver-name=pgjdbc-ng, driver-module-name=com.impossibl.pgjdbc-ng, driver-datasource-class-name=com.impossibl.postgres.jdbc.PGDataSource, driver-xa-datasource-class-name=com.impossibl.postgres.jdbc.xa.PGXADataSource)

Finally, define a Standard or XA DataSource using the named JDBC Driver

Add a standard DataSource
data-source add --name=PostgreSQLDS --jndi-name=java:jboss/PostgreSQLDS --driver-name=pgjdbc-ng --connection-url=jdbc:pgsql://localhost:5432/test --user-name=test --password=test
Through the --connection-properties argument any settings available in Connection Settings can be provided.
Add an XA DataSource
xa-data-source add --name=PostgreSQLXADS --jndi-name=java:jboss/PostgreSQLXADS --driver-name=pgjdbc-ng --xa-datasource-properties=[{databaseUrl=jdbc:pgsql://localhost:5432/test}] --user-name=test --password=test
For brevity databaseUrl is used here but any settings available in Connection Settings can be provided in the --xa-datasource-properties argument.

14.2. File Deployment

For people experienced in WildFly configuration, samples are provided of the configuration files needed to get started using PGJDBC-NG.

14.2.1. Driver

Download the driver JAR file and add it as a module using the following module definition.

Driver module.xml
<module xmlns="urn:jboss:module:1.3" name="com.impossibl.pgjdbc-ng">
  <resources>
    <resource-root path="pgjdbc-ng-all-0.8.9.jar"/>
  </resources>
  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>

Add a definition to datasources/jdbc-driver subsystem.

Driver Definition
<driver name="pgjdbc-ng" module="com.impossibl.pgjdbc-ng">
  <datasource-class>com.impossibl.postgres.jdbc.PGDataSource</datasource-class>
  <xa-datasource-class>com.impossibl.postgres.jdbc.xa.PGXADataSource</xa-datasource-class>
</driver>

14.2.2. DataSource

Add a DataSource starting with an example definition provided.

Both standard and XA DataSource definitions are provided but only one is required based on your use of XA capabilities.
DataSource Definition
<datasource jndi-name="java:jboss/datasources/PostgreSQLDS" pool-name="PostgreSQLDS">
   <connection-url>jdbc:pgsql://localhost:5432/test</connection-url> (1)
   <connection-property name="housekeeper">false</connection-property> (2)
   <connection-property name="parsedSqlCacheSize">50</connection-property> (2)
   <connection-property name="preparedStatementCacheSize">0</connection-property> (2)
   <driver>pgjdbc-ng</driver>
   <security>
     <user-name>user</user-name> (3)
     <password>secret</password> (4)
   </security>
   <validation>
     <validate-on-match>true</validate-on-match>
     <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"></valid-connection-checker>
     <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"></exception-sorter>
   </validation>
   <statement>
     <track-statements>nowarn</track-statements>
     <prepared-statement-cache-size>32</prepared-statement-cache-size>
   </statement>
</datasource>
1 A valid URL specifying your PostgreSQL instance. See Connection URLs for details.
2 Suggested properties for PGJDBC-NG with WildFly.
Any valid Connection Settings can be specified here.
3 Authentication username for your PostgreSQL instance.
4 Authentication password for your PostgreSQL instance.
XA DataSource Connector
<xa-datasource jndi-name="java:jboss/datasources/PostgreSQLXADS" pool-name="PostgreSQLXADS">
   <xa-datasource-property name="host">localhost</xa-datasource-property> (1)
   <xa-datasource-property name="port">5432</xa-datasource-property> (2)
   <xa-datasource-property name="database">test</xa-datasource-property> (3)
   <xa-datasource-property name="housekeeper">false</xa-datasource-property> (4)
   <xa-datasource-property name="parsedSqlCacheSize">50</xa-datasource-property> (4)
   <xa-datasource-property name="preparedStatementCacheSize">0</xa-datasource-property> (4)
   <driver>pgjdbc-ng</driver>
   <security>
     <user-name>user</user-name> (5)
     <password>secret</password> (6)
   </security>
   <validation>
     <validate-on-match>true</validate-on-match>
     <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"></valid-connection-checker>
     <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"></exception-sorter>
   </validation>
   <statement>
     <track-statements>nowarn</track-statements>
     <prepared-statement-cache-size>32</prepared-statement-cache-size>
   </statement>
</xa-datasource>
1 Hostname specifying your PostgreSQL instance.
2 Port number specifying your PostgreSQL instance.
3 Target Database for connections.
4 Suggested properties for PGJDBC-NG with WildFly.
Any valid Connection Settings can be specified here.
5 Authentication username for your PostgreSQL instance.
6 Authentication password for your PostgreSQL instance.

14.3. Console Deployment

The following is a walk-through for using the Standard PGJDBC-NG DataSource in WildFly via the Admin Console.

14.3.1. Driver

In the WildFly Admin Console, navigate to the "Deployments" tab and choose "Upload Deployment".

Add the pgjdbc-ng-all-0.8.9.jar file and choose Next.

On the following step, ensure the "Enabled" option is selected and choose Finish

14.3.2. DataSource

In the WildFly Admin Console, visit the "Datasources & Drivers" section in the "Subsystems" category of the "Configuration" tab.

If Driver Deployment was completed correctly, pgjdbc-ng-all-0.8.9.jar should be visible under the "Drivers" subsection.

Choose the "Datasources" subsection and choose "Add DataSource". Use the following steps to complete its configuration.

Step (1/5): Template

Select the "Custom" option and choose Next.

Step (2/5): Attributes

Provide the required attributes described below, once complete choose Next.

Name

Give your DataSource a name such as PostgreSQL. This name can be anything but should be kept short and descriptive.

JNDI Name

WildFly requires JNDI Name to start with java:jboss, give it a name such as java:jboss/datasources/PostgreSQLDS.

After the required prefix, java:jboss, the name can be any allowable freeform text; it is advisable to keep to a name that clearly marks it as a DataSource.

Step (3/5): JDBC Driver

You will need to select the deployed PGJDBC-NG driver. If Driver Deployment was completed correctly, you will see the Jar file name in the list of detected drivers.

Select pgjdbc-ng-all-0.8.9.jar from the dropdown list, leaving the other non-required fields blank, and choose Next.

If you do not see the driver in the list of detected drivers ensure you enabled it during deployment as detailed in Driver Deployment.
Step (4/5): Connection

Provide the connection settings described below, leaving the other non-required fields blank, once complete choose Next.

URL

Provide a valid PGJDBC-NG connection url as in jdbc:pgsql://<server-name>:<port>/<databasename> (see Connection URLs for complete details).

Username

Connection authentication username.

Password

Connection authentication password.

Step (5/5) Testing

The final step is to test the DataSource’s ability to create connections.

Choose Test Connection to test and validate the provided configuration. After the configuration passes testing, choose Next and finally choose Finish.

If the test fails, ensure your PostgreSQL instance is running and accessible, that the values provided in the preceding steps are correct, and that you can make connections via PostgreSQL’s psql using the same values.

14.4. Troubleshooting

The best advice is to view the logs in the WildFly installation folder. All exceptions will be logged here and should provide information as to the issue with your installation and/or configurations.