merge 930 new feature

This commit is contained in:
justbk
2021-09-17 03:59:10 +00:00
committed by justbk2015
50 changed files with 3753 additions and 1248 deletions

README_cn.md Normal file

@@ -0,0 +1,246 @@
![openGauss Logo](https://opengauss.org/img/brand/view/logo2.jpg)
## What is openGauss-connector-JDBC
openGauss is an open-source relational database management system with enterprise-grade features such as multi-core high performance, end-to-end security, and intelligent O&M.
The openGauss kernel originated from the open-source database PostgreSQL and incorporates Huawei's years of kernel experience in the database field, with adaptations and optimizations to the architecture, transactions, storage engine, optimizer, and the ARM architecture. As an open-source database, openGauss aims to build a technologically diverse open-source database community together with a broad base of developers.
Java Database Connectivity (**JDBC**) is the Java API that standardizes how client programs access a database, providing methods for operations such as querying and updating data. openGauss-connector-JDBC is the API that lets users access the database from Java. Users can either use the jar packages provided on the openGauss website (see the [direct download section](#安装)) or build the jar themselves (see the [building from source section](#从源码构建)) to operate the database via JDBC.
## Getting the driver directly
Before using the openGauss JDBC driver, make sure your server is already running the openGauss database (see the openGauss [Quick Start](https://opengauss.org/zh/docs/latest/docs/Quickstart/Quickstart.html)).
### From the Maven central repository
Java developers can get the jar directly from the Maven central repository using the following coordinates:
```
<groupId>org.opengauss</groupId>
<artifactId>opengauss-jdbc</artifactId>
```
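For example, a complete dependency declaration in a project's pom.xml might look like this (the version shown is illustrative; check the Maven central repository for the release you need):
```
<dependency>
    <groupId>org.opengauss</groupId>
    <artifactId>opengauss-jdbc</artifactId>
    <!-- illustrative version; pick the release you need -->
    <version>2.1.0</version>
</dependency>
```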
### Downloading the package from the community website
1. Download the package from the official website.
Click this [link](https://opengauss.org/zh/download.html) and, under the openGauss Connectors section, click the JDBC_${version} download button matching the operating system of the server where your database is deployed. ${version} is the version number you need.
2. Extract the archive.
```
tar -zxvf openGauss-${version}-JDBC.tar.gz
```
3. After extraction, two jar packages appear in the same directory: opengauss-jdbc-${version}.jar and postgresql.jar. opengauss-jdbc-${version}.jar can coexist with PG-JDBC: for versions after 2.0.1, the package name changed from org.postgresql to org.opengauss, and the driver URL prefix changed from jdbc:postgresql:// to jdbc:opengauss://. This is also the package currently published to the Maven central repository.
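Since only the URL prefix and package name changed, migrating a legacy connection string is a mechanical rewrite. A minimal sketch (this helper class is hypothetical, not part of the driver):

```java
// Hypothetical helper: rewrites a legacy PostgreSQL JDBC URL to the
// jdbc:opengauss:// prefix used by opengauss-jdbc after version 2.0.1.
public class UrlMigration {
    private static final String PG_PREFIX = "jdbc:postgresql://";
    private static final String OG_PREFIX = "jdbc:opengauss://";

    public static String toOpenGaussUrl(String url) {
        if (url.startsWith(PG_PREFIX)) {
            return OG_PREFIX + url.substring(PG_PREFIX.length());
        }
        return url; // already an openGauss URL or some other driver's URL
    }
}
```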
## Building from source
### Overview
The openGauss JDBC driver currently supports three build methods: the one-click build.sh script, step-by-step build scripts, and the mvn command.
### OS and software dependency requirements
Building the openGauss JDBC driver is supported on the following operating systems:
- CentOS 7.6(x86架构)
- openEuler-20.03-LTS(aarch64架构)
- Windows
To adapt to other systems, refer to the blog post [openGauss database compilation guide](https://opengauss.org/zh/blogs/blogs.html?post/xingchen/opengauss_compile/).
The following table lists the software required to compile openGauss.
It is recommended to install the default packages of these dependencies from the installation media or sources of the operating systems listed above. If they are not available, refer to the recommended versions below.
The software and environment requirements are as follows:
| Software / environment | Recommended version |
| ------------------- | ---------- |
| maven | 3.6.1 |
| java | 1.8 |
| Git Bash (Windows) | No recommended version |
| zip/unzip (Windows) | No recommended version |
### Downloading the openGauss-connector-jdbc source
You can download the openGauss-connector-jdbc source from the open-source community.
```
git clone https://gitee.com/opengauss/openGauss-connector-jdbc.git
```
Now that you have the complete openGauss-connector-jdbc code, store it in the following directory (using sda as an example).
- /sda/openGauss-connector-jdbc
### Compiling third-party software (optional)
Before building openGauss-connector-jdbc, the open-source and third-party software that openGauss depends on must be compiled. Precompiled copies are already provided in the open_source directory under openGauss-connector-jdbc, so you can skip this step by using them directly. These components are stored in the openGauss-third_party repository and usually only need to be built once; rebuild them when the open-source software is updated.
You can also obtain the compiled and built output of the open-source software directly from the **binarylibs** repository.
If you want to compile the third-party software yourself, see the openGauss-third_party repository for details.
After the scripts above have run, the final build output is stored in the **binarylibs** directory at the same level as **openGauss-third_party**. These files are used when compiling **openGauss-connector-jdbc**.
### Generating the jar packages
#### Using the one-click script (Linux)
build.sh in openGauss-connector-jdbc is the main build script. It compiles and packages the code in one step.
The parameters are described in the following table.
| Option | Default | Argument | Description |
| :--- | :--------------------------- | :---------------- | :----------------------------------------------------------- |
| -3rd | ${Code directory}/binarylibs | [binarylibs path] | Specifies the binarylibs path. It is recommended to set it to open_source/ or /sda/openGauss-connector-jdbc/open_source/. If you have built the third-party libraries that openGauss depends on yourself, you can also point it to that build, for example /sda/binarylibs. |
> **Note**
>
> - **-3rd [binarylibs path]** is the path to **binarylibs**. By default, **binarylibs** is assumed to exist in the current code directory, so this option is unnecessary if **binarylibs** has been moved into **openGauss-server** or a symbolic link to it has been created there. Note, however, that the directory can then easily be removed by the **git clean** command.
Now that you know how build.sh is used, compile openGauss-connector-jdbc with commands of the following form.
1. Run the following command to enter the code directory:
```
[user@linux sda]$ cd /sda/openGauss-connector-jdbc
```
2. Run the following command to package with build.sh:
```
[user@linux openGauss-connector-jdbc]$ sh build.sh -3rd open_source/
```
When it finishes, output like the following indicates that packaging succeeded:
```
Successfully make postgresql.jar
opengauss-jdbc-${version}.jar
postgresql.jar
Successfully make jdbc jar package
now, all packages has finished!
```
After a successful build, two jar packages are produced: opengauss-jdbc-${version}.jar and postgresql.jar. The compiled jars are located in **/sda/openGauss-connector-jdbc/output**.
#### Using the one-click script (Windows)
1. Prepare the Java and Maven environments, as well as the **zip/unzip** commands usable from **Git Bash**.
2. Run the following command to enter the code directory:
```
[user@linux openGauss-connector-jdbc]$ cd /sda/openGauss-connector-jdbc
```
3. Run the build_on_windows_git.sh script:
```
[user@linux openGauss-connector-jdbc]$ sh build_on_windows_git.sh
```
On success, the script prints the following:
```
begin run
Successfully make postgresql.jar package in /sda/openGauss-connector-jdbc/output/postgresql.jar
Successfully make opengauss-jdbc jar package in /sda/openGauss-connector-jdbc/output/opengauss-jdbc-${version}.jar
Successfully make jdbc jar package in /sda/openGauss-connector-jdbc/openGauss-${version}-JDBC.tar.gz
```
After a successful build, two jar packages are produced: opengauss-jdbc-${version}.jar and postgresql.jar, located in **/sda/openGauss-connector-jdbc/output/**. In addition, an archive containing both jars, openGauss-${version}-JDBC.tar.gz, is created in **/sda/openGauss-connector-jdbc/**.
#### Using the mvn command (Windows or Linux)
1. Prepare the Java and Maven environments; on Windows, also prepare the **zip/unzip** commands usable from **Git Bash**.
2. Run the following command to enter the code directory:
```
[user@linux sda]$ cd /sda/openGauss-connector-jdbc
```
3. Run the demo preparation script:
```
[user@linux openGauss-connector-jdbc]$ sh prepare_demo.sh
```
4. Modify pom.xml in the root directory:
Change the module in the modules section from jdbc to pgjdbc, as shown below.
```
<modules>
<module>pgjdbc</module>
</modules>
```
5. Run the mvn command:
```
[user@linux openGauss-connector-jdbc]$ mvn clean install -Dmaven.test.skip=true
```
On Linux, a successful build prints the following:
```
[INFO] Reactor Summary:
[INFO]
[INFO] openGauss JDBC Driver ............................. SUCCESS [5.344s]
[INFO] PostgreSQL JDBC Driver aggregate .................. SUCCESS [0.004s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.439s
[INFO] Finished at: Tue Aug 31 21:55:01 EDT 2021
[INFO] Final Memory: 44M/1763M
[INFO] ------------------------------------------------------------------------
```
After a successful build, two jar packages are produced: opengauss-jdbc-${version}.jar and original-opengauss-jdbc-${version}.jar, located in /sda/openGauss-connector-jdbc/pgjdbc/target/.
## Using JDBC
See [Development Based on JDBC](https://opengauss.org/zh/docs/latest/docs/Developerguide/%E5%9F%BA%E4%BA%8EJDBC%E5%BC%80%E5%8F%91.html).
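For a quick orientation before reading the full guide, a minimal connection sketch is shown below. The host, port, database, and credentials are placeholders, and the jdbc:opengauss:// URL prefix assumes the renamed driver (versions after 2.0.1):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcDemo {
    // Builds an openGauss JDBC URL from its parts so the format is explicit.
    public static String buildUrl(String host, int port, String database) {
        return "jdbc:opengauss://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) throws Exception {
        Class.forName("org.opengauss.Driver"); // provided by opengauss-jdbc-${version}.jar
        String url = buildUrl("127.0.0.1", 5432, "postgres"); // placeholder host/port/db
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```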
## Documentation
For more installation guides, tutorials, and APIs, see the [user documentation](https://gitee.com/opengauss/docs).
## Community
### Governance
See how openGauss implements open [governance](https://gitee.com/opengauss/community/blob/master/governance.md).
### Communication
- WeLink: a communication platform for developers.
- IRC channel: `#opengauss-meeting` (meeting minutes only).
- Mailing list: https://opengauss.org/zh/community/onlineCommunication.html
## Contributing
Contributions are welcome. See our [community contribution guide](https://opengauss.org/zh/contribution.html) for details.
## Release notes
See the [release notes](https://opengauss.org/zh/docs/2.0.0/docs/Releasenotes/Releasenotes.html).
## License
[MulanPSL-2.0](http://license.coscl.org.cn/MulanPSL2/)


@@ -20,6 +20,9 @@ kernel=""
if [ -f "/etc/euleros-release" ]
then
kernel=$(cat /etc/euleros-release | awk -F ' ' '{print $1}' | tr A-Z a-z)
elif [ -f "/etc/kylin-release" ]
then
kernel=$(cat /etc/kylin-release | awk -F ' ' '{print $1}' | tr A-Z a-z)
else
kernel=$(lsb_release -d | awk -F ' ' '{print $2}'| tr A-Z a-z)
fi
@@ -116,6 +119,14 @@ then
plat_form_str=openeuler_"$cpu_bit"
fi
##################################################################################
# kylin platform
# the result form like this: kylin_aarch64
##################################################################################
if [ "$kernel"x = "kylin"x ]
then
plat_form_str=kylin_"$cpu_bit"
fi
##################################################################################
#


@@ -1,3 +1,4 @@
<?xml version="1.0" encoding="utf-8" ?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.opengauss</groupId>
@@ -30,11 +31,6 @@
</properties>
<dependencies>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.30</version>
</dependency>
<dependency>
<groupId>com.huawei</groupId>
<artifactId>demo-0.0.1-SNAPSHOT</artifactId>
@@ -132,6 +128,12 @@
<testExcludes>
<exclude>org.postgresql/test/sspi/*.java</exclude>
</testExcludes>
<source>1.8</source>
<target>1.8</target>
<showWarnings>true</showWarnings>
<compilerArgs>
<arg>-Xlint:all</arg>
</compilerArgs>
</configuration>
</plugin>
</plugins>
@@ -160,6 +162,12 @@
<exclude>**/PGDataSourceFactoryTest.java</exclude>
<exclude>**/OsgiTestSuite.java</exclude>
</testExcludes>
<source>1.8</source>
<target>1.8</target>
<showWarnings>true</showWarnings>
<compilerArgs>
<arg>-Xlint:all</arg>
</compilerArgs>
</configuration>
</plugin>
<plugin>
@@ -263,12 +271,6 @@
<exclude>**</exclude>
</excludes>
</filter>
<filter>
<artifact>org.slf4j:jcl-over-slf4j</artifact>
<excludes>
<exclude>**</exclude>
</excludes>
</filter>
<filter>
<artifact>*:*</artifact>
<excludes>
@@ -334,12 +336,25 @@
<configuration>
<source>1.8</source>
<target>1.8</target>
<showWarnings>true</showWarnings>
<compilerArgs>
<arg>-Xlint:all</arg>
</compilerArgs>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<distributionManagement>
<repository>
<id>ClouldArtifact-central</id>
<url>http://cmc.centralrepo.rnd.huawei.com/maven/</url>
</repository>
<snapshotRepository>
<id>ClouldArtifact-central</id>
<url>http://cmc.centralrepo.rnd.huawei.com/maven/</url>
</snapshotRepository>
</distributionManagement>
<scm>
<tag>REL42.2.5</tag>
</scm>


@@ -5,6 +5,7 @@
package org.postgresql;
import org.postgresql.clusterchooser.GlobalClusterStatusTracker;
import org.postgresql.hostchooser.MultiHostChooser;
import org.postgresql.jdbc.PgConnection;
import org.postgresql.log.Logger;
@@ -291,6 +292,13 @@ public class Driver implements java.sql.Driver {
LOGGER = Logger.getLogger("org.postgresql.Driver");
}
if(PGProperty.PRIORITY_SERVERS.get(props) != null){
if(!GlobalClusterStatusTracker.isVaildPriorityServers(props)){
return null;
}
GlobalClusterStatusTracker.refreshProperties(props);
}
if (MultiHostChooser.isUsingAutoLoadBalance(props)) {
if(!MultiHostChooser.isVaildPriorityLoadBalance(props)){
return null;
@@ -525,6 +533,7 @@ public class Driver implements java.sql.Driver {
*/
private static Connection makeConnection(String url, Properties props) throws SQLException {
PgConnection pgConnection = new PgConnection(hostSpecs(props), user(props), database(props), props, url);
GlobalConnectionTracker.possessConnectionReference(pgConnection.getQueryExecutor(), props);
return pgConnection;
}
@@ -722,7 +731,9 @@ public class Driver implements java.sql.Driver {
urlProps.setProperty(token.substring(0, l_pos), URLCoder.decode(token.substring(l_pos + 1)));
}
}
if(urlProps.getProperty("enable_ce") != null && urlProps.getProperty("enable_ce").equals("1")) {
urlProps.setProperty("CLIENTLOGIC", "1");
}
return urlProps;
}else {
StringBuffer tmp = new StringBuffer();
@@ -861,6 +872,9 @@ public class Driver implements java.sql.Driver {
if(urlProps.getProperty("ssl") == null && urlProps.getProperty("sslmode") == null) {
urlProps.setProperty("sslmode", "require");
}
if(urlProps.getProperty("enable_ce") != null && urlProps.getProperty("enable_ce").equals("1")) {
urlProps.setProperty("CLIENTLOGIC", "1");
}
return urlProps;
}
}


@@ -0,0 +1,96 @@
package org.postgresql;
import org.postgresql.core.QueryExecutor;
import org.postgresql.jdbc.PgConnection;
import org.postgresql.log.Log;
import org.postgresql.log.Logger;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
/**
* Keeps track of connection reference and host spec in a global map.
*/
public class GlobalConnectionTracker {
private static final Map<String, HashMap<Integer, QueryExecutor>> connectionManager = new HashMap<>();
private static Log LOGGER = Logger.getLogger(GlobalConnectionTracker.class.getName());
/**
*
* @param props the parsed/defaulted connection properties
* @return
*/
private static boolean isForceTargetServerSlave(Properties props){
return PGProperty.FORCE_TARGET_SERVER_SLAVE.getBoolean(props) &&
("slave".equals(PGProperty.TARGET_SERVER_TYPE.get(props)) || "secondary".equals(PGProperty.TARGET_SERVER_TYPE.get(props)));
}
/**
* Store the actual query executor and connection host spec.
*
* @param props the parsed/defaulted connection properties
* @param queryExecutor
*/
public static void possessConnectionReference(QueryExecutor queryExecutor, Properties props) {
if(!isForceTargetServerSlave(props)) return;
int identityQueryExecute = System.identityHashCode(queryExecutor);
String hostSpec = queryExecutor.getHostSpec().toString();
synchronized (connectionManager) {
HashMap<Integer, QueryExecutor> hostConnection = connectionManager.getOrDefault(hostSpec, null);
if (hostConnection == null) {
hostConnection = new HashMap<>();
}
hostConnection.put(identityQueryExecute, queryExecutor);
connectionManager.put(hostSpec, hostConnection);
}
}
/**
* Clear the reference when the connection is closed.
*
* @param props the parsed/defaulted connection properties
* @param queryExecutor
*/
public static void releaseConnectionReference(QueryExecutor queryExecutor, Properties props) {
if(!isForceTargetServerSlave(props)) return;
String hostSpec = queryExecutor.getHostSpec().toString();
int identityQueryExecute = System.identityHashCode(queryExecutor);
synchronized (connectionManager) {
HashMap<Integer, QueryExecutor> hostConnection = connectionManager.getOrDefault(hostSpec, null);
if (hostConnection != null) {
if (hostConnection.get(identityQueryExecute) != null) {
hostConnection.put(identityQueryExecute, null);
} else {
LOGGER.info("[SWITCHOVER] The identity of the queryExecutor has changed!");
}
} else {
LOGGER.info("[SWITCHOVER] No connection found under this host!");
}
}
}
/**
* Close all existing connections under the specified host.
*
* @param props the parsed/defaulted connection properties
* @param hostSpec ip and port.
*/
public static void closeOldConnection(String hostSpec, Properties props) {
if(!isForceTargetServerSlave(props)) return;
synchronized (connectionManager) {
HashMap<Integer, QueryExecutor> hostConnection = connectionManager.getOrDefault(hostSpec, null);
if (hostConnection != null) {
LOGGER.info("[SWITCHOVER] The hostSpec: " + hostSpec + " status from slave to master, start to close the original connection.");
for (QueryExecutor queryExecutor : hostConnection.values()) {
if (queryExecutor != null && !queryExecutor.isClosed()) {
queryExecutor.setAvailability(false);
}
}
hostConnection.clear();
LOGGER.info("[SWITCHOVER] The hostSpec: " + hostSpec + " status from slave to master, end to close the original connection.");
}
}
}
}


@@ -19,6 +19,12 @@ import java.util.Properties;
*/
public enum PGProperty {
/**
* If to turn on client encryption feature
*/
PG_CLIENT_LOGIC("enable_ce", null,
"value of 1 is used to turn on the client logic driver feature", false),
/**
* Database name to connect to (may be specified directly in the JDBC URL).
*/
@@ -375,12 +381,26 @@
TARGET_SERVER_TYPE("targetServerType", "any", "Specifies what kind of server to connect", false,
"any", "master", "slave", "secondary", "preferSlave", "preferSecondary"),
/**
* Specify the number of nodes to be connected first
*/
PRIORITY_SERVERS("priorityServers",null,"Specify the number of nodes to be connected first"),
/**
* When using the priority load balancing feature, if use the node_host field of the pgxc_node table
* Load balancing feature will be invalid, So we use EIP uniformly unless data IP must be used.
*/
USING_EIP("usingEip", "true", "Use elastic IP address when loadBalance"),
LOAD_BALANCE_HOSTS("loadBalanceHosts", "false",
"If disabled hosts are connected in the given order. If enabled hosts are chosen randomly from the set of suitable candidates"),
HOST_RECHECK_SECONDS("hostRecheckSeconds", "10",
"Specifies period (seconds) after which the host status is checked again in case it has changed"),
FORCE_TARGET_SERVER_SLAVE("forceTargetServerSlave", "false",
"whether to close old connections to standby node once it was promoted as primary node."),
/**
* <p>Specifies which mode is used to execute queries to database: simple means ('Q' execute, no parse, no bind, text mode only),
* extended means always use bind/execute messages, extendedForPrepared means extended for prepared statements only,


@@ -144,7 +144,13 @@ public class QueryCNListUtils {
checkLatestUpdate = cnList.lastUpdated;
}
ArrayList<HostSpec> cnListRefreshed = new ArrayList<>();
String query = "select node_host,node_port from pgxc_node where node_type='C' and nodeis_active = true;";
Boolean usingEip = PGProperty.USING_EIP.getBoolean(info);
String query;
if (usingEip) {
query = "select node_host1,node_port1 from pgxc_node where node_type='C' and nodeis_active = true order by node_host1;";
} else {
query = "select node_host,node_port from pgxc_node where node_type='C' and nodeis_active = true order by node_host;";
}
List<byte[][]> results = SetupQueryRunner.runForList(queryExecutor, query, true);
for (byte[][] result : results) {
String host = queryExecutor.getEncoding().decode(result[0]);


@@ -0,0 +1,11 @@
package org.postgresql.clusterchooser;
/**
* Known state of a cluster.
*/
public enum ClusterStatus {
Unknown,
ConnectFail,
MasterCluster,
SecondaryCluster
}


@@ -0,0 +1,227 @@
package org.postgresql.clusterchooser;
import org.postgresql.Driver;
import org.postgresql.PGProperty;
import org.postgresql.QueryCNListUtils;
import org.postgresql.hostchooser.MultiHostChooser;
import org.postgresql.log.Log;
import org.postgresql.log.Logger;
import org.postgresql.util.ClusterSpec;
import org.postgresql.util.HostSpec;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
* Keeps track of HostSpec targets in a global map.
*/
public class GlobalClusterStatusTracker {
private static final Map<String, ClusterSpecStatus> clusterStatusMap = new HashMap<>();
private static final Map<String, Boolean> firstConnectionMap = new ConcurrentHashMap<>();
private static Map<String, String> masterClusterList = new HashMap<>();
private static Log LOGGER = Logger.getLogger(GlobalClusterStatusTracker.class.getName());
/**
* Store the actual observed cluster status.
*
* @param clusterSpec The cluster whose status is known.
* @param clusterStatus Latest known status for the cluster.
*/
public static void reportClusterStatus(ClusterSpec clusterSpec, ClusterStatus clusterStatus) {
String key = keyFromClusterSpec(clusterSpec);
synchronized (clusterStatusMap) {
ClusterSpecStatus clusterSpecStatus = clusterStatusMap.get(key);
if (clusterSpecStatus == null) {
clusterSpecStatus = new ClusterSpecStatus(clusterSpec);
clusterStatusMap.put(key, clusterSpecStatus);
}
clusterSpecStatus.status = clusterStatus;
}
}
/**
* The actual primary cluster in the storage cluster.
*
* @param props
* @param clusterSpec
*/
public static void reportMasterCluster(Properties props, ClusterSpec clusterSpec) {
String urlKey = QueryCNListUtils.keyFromURL(props);
String masterClusterKey = keyFromClusterSpec(clusterSpec);
synchronized (masterClusterList) {
masterClusterList.put(urlKey, masterClusterKey);
}
}
/**
* @param hostSpecs
* @return
*/
public static ClusterStatus getClusterStatus(HostSpec[] hostSpecs) {
HostSpec[] cloneHostSpecs = hostSpecs.clone();
Arrays.sort(cloneHostSpecs);
String clusterKey = Arrays.toString(cloneHostSpecs);
synchronized (clusterStatusMap) {
ClusterSpecStatus clusterSpecStatus = clusterStatusMap.get(clusterKey);
if (clusterSpecStatus != null && clusterSpecStatus.status != null) {
return clusterSpecStatus.status;
}
}
return ClusterStatus.Unknown;
}
/**
* Submit the cluster information in props.
* And try to get the information of the master cluster from the refreshed results.
*
* @param props Connection properties.
*/
public static void refreshProperties(Properties props) {
String key = QueryCNListUtils.keyFromURL(props);
// The value is false only when it is obtained for the first time.
boolean block;
synchronized (firstConnectionMap) {
block = firstConnectionMap.getOrDefault(key, false);
firstConnectionMap.put(key, true);
}
String masterClusterkey = getMasterClusterkey(key, block);
if ("".equals(masterClusterkey)) {
return;
}
LOGGER.info("[PRIORITYSERVERS] Find the main cluster in dual clusters." +
" | DualCluster: " + key +
" | MasterCluster:" + masterClusterkey
);
// The result is updated to props.
props.setProperty("MASTERCLUSTER", masterClusterkey);
}
/**
* If block is set to true, waiting will be prevented if the master cluster cannot be found.
*
* @param key Key in the connection string.
* @param block Concurrency lock.
* @return
*/
public static String getMasterClusterkey(String key, boolean block) {
int intervalWaitHasRefreshedCNList = 10;
int timesWaitHasRefreshedCNList = 200;
for (int i = 0; i <= timesWaitHasRefreshedCNList; i++) {
synchronized (masterClusterList) {
String masterClusterKey = masterClusterList.get(key);
if (masterClusterKey != null && !"".equals(masterClusterKey)) {
return masterClusterKey;
}
}
if (!block) break;
try {
Thread.sleep(intervalWaitHasRefreshedCNList);
} catch (InterruptedException e) {
LOGGER.info("[PRIORITYSERVERS] InterruptedException. This caused by: \"Thread.sleep\", waiting for refreshing master cluster from connection.");
}
}
if (block) {
LOGGER.info("[PRIORITYSERVERS] Blocking time extends 2 seconds need to pay attention.");
}
return "";
}
/**
* Determine whether the configuration to support disaster recovery switching is effective.
* if the disaster tolerance switch function is used, the value of this parameter should be a number,
* and the value should be greater than 0 and less than the number of nodes in the connection string.
* <p>
* Return true if the above conditions are met, otherwise, return false.
*
* @param props Connection properties
* @return
*/
public static boolean isVaildPriorityServers(Properties props) {
String priorityServers = PGProperty.PRIORITY_SERVERS.get(props);
try {
int priorityServersNumber = Integer.parseInt(priorityServers);
int lengthPGPORTURL = props.getProperty("PGPORTURL").split(",").length;
if (lengthPGPORTURL <= priorityServersNumber || priorityServersNumber <= 0) {
LOGGER.warn("When configuring priority servers, The number of priority nodes should be less than the number of nodes on the URL and greater than 0.");
return false;
}
} catch (NumberFormatException e) {
LOGGER.warn("When configuring priority servers, \"priorityServers\" should be number.");
return false;
}
return true;
}
/**
* Split the clusters in the url, get the master cluster first when there are dual clusters.
*
* @param hostSpecs Specification information of all hosts in the cluster.
* @param info the parsed/defaulted connection properties
* @return
*/
public static Iterator<ClusterSpec> getClusterFromHostSpecs(HostSpec[] hostSpecs, Properties info) {
String priorityServers = PGProperty.PRIORITY_SERVERS.get(info);
ClusterSpec[] clusterSpecs;
if (priorityServers != null) {
clusterSpecs = new ClusterSpec[2];
//cluster demarcation index.
Integer index = Integer.valueOf(priorityServers);
//known master cluster.
String masterCluster = info.getProperty("MASTERCLUSTER");
//When load balancing, use the real queried cluster information.
if (MultiHostChooser.isUsingAutoLoadBalance(info) && masterCluster != null) {
clusterSpecs[0] = new ClusterSpec(hostSpecs);
HostSpec[] urlHostSpecs = Driver.getURLHostSpecs(info);
HostSpec[] slaveHostSpecs = Arrays.copyOfRange(urlHostSpecs, index, urlHostSpecs.length);
if(!masterCluster.contains(slaveHostSpecs[0].toString())){
clusterSpecs[1] = new ClusterSpec(slaveHostSpecs);
}else{
clusterSpecs[1] = new ClusterSpec(Arrays.copyOfRange(urlHostSpecs, 0, index));
}
} else {
HostSpec[] masterHostSpecs = Arrays.copyOfRange(hostSpecs, 0, index);
HostSpec[] slaveHostSpecs = Arrays.copyOfRange(hostSpecs, index, hostSpecs.length);
//Prioritize the known master cluster.
if (masterCluster != null && masterCluster.contains(slaveHostSpecs[0].toString())) {
clusterSpecs[0] = new ClusterSpec(slaveHostSpecs);
clusterSpecs[1] = new ClusterSpec(masterHostSpecs);
} else {
clusterSpecs[0] = new ClusterSpec(masterHostSpecs);
clusterSpecs[1] = new ClusterSpec(slaveHostSpecs);
}
}
} else {
clusterSpecs = new ClusterSpec[1];
clusterSpecs[0] = new ClusterSpec(hostSpecs);
}
return new ArrayList<>(Arrays.asList(clusterSpecs)).iterator();
}
/**
* Returns a key representing the cluster based on clusterSpec.
*
* @param clusterSpec Nodes in the cluster.
* @return Key representing the cluster.
*/
public static String keyFromClusterSpec(ClusterSpec clusterSpec) {
HostSpec[] hostSpecs = clusterSpec.getHostSpecs();
Arrays.sort(hostSpecs);
return Arrays.toString(hostSpecs);
}
static class ClusterSpecStatus {
final ClusterSpec cluster;
ClusterStatus status;
ClusterSpecStatus(ClusterSpec cluster) {
this.cluster = cluster;
}
@Override
public String toString() {
return cluster.toString() + '=' + status;
}
}
}


@@ -6,6 +6,7 @@
package org.postgresql.core;
import org.postgresql.PGConnection;
import org.postgresql.jdbc.ClientLogic;
import org.postgresql.jdbc.FieldMetadata;
import org.postgresql.jdbc.TimestampUtils;
import org.postgresql.log.Log;
@@ -21,6 +22,13 @@ import java.util.TimerTask;
* Driver-internal connection interface. Application code should not use this interface.
*/
public interface BaseConnection extends PGConnection, Connection {
/**
* Retrieves the connection ClientLogic instance.
* If the client logic option is not turned on, returns null
*
*/
ClientLogic getClientLogic();
/**
* Cancel the current query executing on this connection.
*


@@ -14,7 +14,7 @@ public class NativeQuery {
private static final String[] BIND_NAMES = new String[128 * 10];
private static final int[] NO_BINDS = new int[0];
public final String nativeSql;
public String nativeSql; /* cannot be final because may be changed with the client encryption feature */
public final int[] bindPositions;
public final SqlCommand command;
public final boolean multiStatement;


@@ -103,6 +103,31 @@ public interface ParameterList {
*/
void setBytea(int index, byte[] data, int offset, int length) throws SQLException;
/**
* Saves the literal value of the parameter,
* so when using binaryTransfer=true client logic can see the string value and pass it to the back-end
* @param index
* @param value
*/
void saveLiteralValueForClientLogic(int index, String value) throws SQLException;
/**
* Similar to setBytea
* Binds a client logic value with a custom oid
* @param index the 1-based parameter index to bind.
* @param data an array containing the raw data value
* @param offset the offset within <code>data</code> of the start of the parameter data.
* @param length the number of bytes of parameter data within <code>data</code> to use.
* @param customOid custom oid value as recognized by the server
* @throws SQLException on error or if <code>index</code> is out of range
*/
void setClientLogicBytea(int index, byte[] data, int offset, int length, int customOid) throws SQLException;
/**
*
* @return the literal values of the parameter saved for client logic in case of binary mode
*/
String[] getLiteralValues();
/**
* Binds a binary bytea value stored as an InputStream. The parameter's type is implicitly set to
* 'bytea'. The stream should remain valid until query execution has completed.


@@ -27,6 +27,7 @@ import java.util.Locale;
*/
public class Parser {
private static final int[] NO_BINDS = new int[0];
private static final char[] chars = new char[]{' ','\n','\r','\t'};
/**
* Parses JDBC query into PostgreSQL's native format. Several queries might be given if separated
@@ -296,13 +297,13 @@ public class Parser {
break;
case 'b':
case 'B':
if(i + 5 < aChars.length && i - 1 > 0) {
if ("E".equalsIgnoreCase(String.valueOf(aChars[i + 1]))
&& "G".equalsIgnoreCase(String.valueOf(aChars[i + 2]))
&& "I".equalsIgnoreCase(String.valueOf(aChars[i + 3]))
&& "N".equalsIgnoreCase(String.valueOf(aChars[i + 4]))
&& isSpecialCharacters(aChars[i - 1])
&& isSpecialCharacters(aChars[i + 5])) {
inBeginEnd ++;
}
}
@@ -443,6 +444,15 @@ public class Parser {
    }
  }
public static boolean isSpecialCharacters(char aChar){
for (char specialChar : chars) {
if(specialChar == aChar){
return true;
}
}
return false;
}
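The patched condition only counts `BEGIN` as opening a block when the characters on both sides are whitespace, so identifiers such as `xbeginy` no longer inflate the `inBeginEnd` nesting counter. A minimal standalone sketch of that check (the real Parser's bookkeeping for dollar quoting, comments, and `END` matching is omitted, and `countBeginKeywords` is an illustrative helper, not a driver method):

```java
// Simplified model of the whitespace-delimited BEGIN detection in Parser.
public class BeginScanSketch {
    private static final char[] DELIMS = new char[]{' ', '\n', '\r', '\t'};

    static boolean isSpecialCharacters(char aChar) {
        for (char c : DELIMS) {
            if (c == aChar) {
                return true;
            }
        }
        return false;
    }

    // Counts BEGIN keywords delimited on both sides, mirroring the
    // i - 1 / i + 5 bounds checks added in the patched Parser.
    static int countBeginKeywords(String sql) {
        char[] aChars = sql.toCharArray();
        int count = 0;
        for (int i = 0; i < aChars.length; i++) {
            if ((aChars[i] == 'b' || aChars[i] == 'B')
                    && i + 5 < aChars.length && i - 1 > 0
                    && "EGIN".equalsIgnoreCase(new String(aChars, i + 1, 4))
                    && isSpecialCharacters(aChars[i - 1])      // char before BEGIN
                    && isSpecialCharacters(aChars[i + 5])) {   // char after BEGIN
                count++;
            }
        }
        return count;
    }
}
```

With the old `i + 4 < aChars.length` check alone, the `begin` inside `xbeginy` would have been counted; the delimiter tests reject it.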
public static String removeFirstComment(String str){
  String queryTemp = str.trim();
  str = str.trim();


@@ -45,6 +45,12 @@ public interface Query {
 */
String getNativeSql();
/**
* Client logic may replace the native query
* @param sql modified native sql
*/
void replaceNativeSqlForClientLogic(String sql);
/**
 * Returns properties of the query (sql keyword, and some other parsing info).
 * @return returns properties of the query (sql keyword, and some other parsing info) or null if not applicable


@@ -118,6 +118,11 @@ public interface QueryExecutor extends TypeTransferModeRegistry {
 */
int QUERY_EXECUTE_AS_SIMPLE = 1024;
/**
* Execute the query and bypass the client logic
* Added to support queries coming from client logic (c++) and avoid a loop back
*/
int QUERY_EXECUTE_BYPASS_CLIENT_LOGIC = 2048;
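Each execute-flag constant is a distinct power of two, so callers combine them with bitwise OR and the executor tests individual bits. A small sketch of that convention (the constant values mirror the interface; the `FlagDemo` class itself is illustrative only):

```java
// Illustrates how power-of-two execute flags compose and are tested.
public class FlagDemo {
    static final int QUERY_EXECUTE_AS_SIMPLE = 1024;
    static final int QUERY_EXECUTE_BYPASS_CLIENT_LOGIC = 2048;

    // True when the caller requested the client-logic bypass bit.
    static boolean bypassesClientLogic(int flags) {
        return (flags & QUERY_EXECUTE_BYPASS_CLIENT_LOGIC) != 0;
    }
}
```

Queries issued from the client-logic (C++) layer set the bypass bit so their execution does not loop back into client logic again.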
/**
 * Execute several Query, passing results to a provided ResultHandler.
@@ -489,4 +494,6 @@ public interface QueryExecutor extends TypeTransferModeRegistry {
  String getSocketAddress();
  void setGaussdbVersion(String gaussdbVersion);
void setAvailability(boolean availability);
}


@@ -5,6 +5,7 @@
package org.postgresql.core;
import org.postgresql.GlobalConnectionTracker;
import org.postgresql.PGNotification;
import org.postgresql.PGProperty;
import org.postgresql.jdbc.AutoSave;
@@ -43,9 +44,11 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  private final PreferQueryMode preferQueryMode;
  private AutoSave autoSave;
  private boolean flushCacheOnDeallocate = true;
private Properties props;
  // default value for server versions that don't report standard_conforming_strings
  private boolean standardConformingStrings = false;
private boolean valid = true;
  private SQLWarning warnings;
  private final ArrayList<PGNotification> notifications = new ArrayList<PGNotification>();
@@ -54,7 +57,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  private final CachedQueryCreateAction cachedQueryCreateAction;

  protected QueryExecutorBase(PGStream pgStream, String user,
      String database, int cancelSignalTimeout, Properties info) throws SQLException {
    this.pgStream = pgStream;
    this.user = user;
    this.database = database;
@@ -65,17 +68,18 @@ public abstract class QueryExecutorBase implements QueryExecutor {
    this.preferQueryMode = PreferQueryMode.of(preferMode);
    this.autoSave = AutoSave.of(PGProperty.AUTOSAVE.get(info));
    this.cachedQueryCreateAction = new CachedQueryCreateAction(this);
this.props = info;
    statementCache = new LruCache<Object, CachedQuery>(
        Math.max(0, PGProperty.PREPARED_STATEMENT_CACHE_QUERIES.getInt(info)),
        Math.max(0, PGProperty.PREPARED_STATEMENT_CACHE_SIZE_MIB.getInt(info) * 1024 * 1024),
        false,
        cachedQueryCreateAction,
        new LruCache.EvictAction<CachedQuery>() {
          @Override
          public void evict(CachedQuery cachedQuery) throws SQLException {
            cachedQuery.query.close();
          }
        });
  }

  protected abstract void sendCloseMessage() throws IOException;
@@ -120,8 +124,8 @@ public abstract class QueryExecutorBase implements QueryExecutor {
    try {
      pgStream.getSocket().close();
    } catch (IOException e) {
      // ignore
      LOGGER.trace("Catch IOException on close:", e);
    }
    closed = true;
  }
@@ -137,6 +141,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
    sendCloseMessage();
    pgStream.flush();
    pgStream.close();
GlobalConnectionTracker.releaseConnectionReference(this, props);
    } catch (IOException ioe) {
      LOGGER.trace("Discarding IOException on close:", ioe);
    }
@@ -146,7 +151,12 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  @Override
  public boolean isClosed() {
-    return closed;
+    return !valid || closed;
  }

  @Override
  public void setAvailability(boolean availability){
    valid = availability;
  }
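The new `valid` flag lets cluster tracking invalidate an executor without closing its socket: once availability is revoked, `isClosed()` reports true and callers stop using the connection. A minimal model of that state pattern (the class here is a stand-in, not `QueryExecutorBase` itself):

```java
// Minimal model of the availability flag: revoking availability makes the
// executor report closed even though the underlying stream was never closed.
public class ExecutorStateSketch {
    private boolean closed = false;
    private boolean valid = true;

    public boolean isClosed() {
        return !valid || closed;
    }

    public void setAvailability(boolean availability) {
        valid = availability;
    }
}
```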
  @Override
@@ -164,7 +174,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
    }
    cancelStream =
        new PGStream(pgStream.getSocketFactory(), pgStream.getHostSpec(), cancelSignalTimeout);
    if (cancelSignalTimeout > 0) {
      cancelStream.getSocket().setSoTimeout(cancelSignalTimeout);
    }
@@ -184,8 +194,8 @@ public abstract class QueryExecutorBase implements QueryExecutor {
      try {
        cancelStream.close();
      } catch (IOException e) {
        // Ignored.
        LOGGER.trace("Catch IOException on close:", e);
      }
    }
  }
@@ -283,7 +293,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  @Override
  public final CachedQuery borrowReturningQuery(String sql, String[] columnNames) throws SQLException {
    return statementCache.borrow(new QueryWithReturningColumnsKey(sql, true, true,
        columnNames
    ));
  }
@@ -299,7 +309,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  @Override
  public final Object createQueryKey(String sql, boolean escapeProcessing,
      boolean isParameterized, String... columnNames) {
    Object key;
    if (columnNames == null || columnNames.length != 0) {
      // Null means "return whatever sensible columns are" (e.g. primary key, or serial, or something like that)
@@ -320,8 +330,8 @@ public abstract class QueryExecutorBase implements QueryExecutor {
  @Override
  public final CachedQuery createQuery(String sql, boolean escapeProcessing,
      boolean isParameterized, String... columnNames)
      throws SQLException {
    Object key = createQueryKey(sql, escapeProcessing, isParameterized, columnNames);
    // Note: cache is not reused here for two reasons:
    // 1) Simplify initial implementation for simple statements
@@ -372,7 +382,7 @@ public abstract class QueryExecutorBase implements QueryExecutor {
    // "cached plan must not change result type"
    String routine = pe.getServerErrorMessage().getRoutine();
    return "RevalidateCachedQuery".equals(routine) // 9.2+
        || "RevalidateCachedPlan".equals(routine); // <= 9.1
  }

  @Override


@@ -48,7 +48,7 @@ public interface ResultHandler {
   * @param insertOID for a single-row INSERT query, the OID of the newly inserted row; 0 if not
   *        available.
   */
-  void handleCommandStatus(String status, int updateCount, long insertOID);
+  void handleCommandStatus(String status, long updateCount, long insertOID);

  /**
   * Called when a warning is emitted.


@@ -30,7 +30,7 @@ public class ResultHandlerBase implements ResultHandler {
  }

  @Override
-  public void handleCommandStatus(String status, int updateCount, long insertOID) {
+  public void handleCommandStatus(String status, long updateCount, long insertOID) {
  }

  @Override


@@ -31,7 +31,7 @@ public class ResultHandlerDelegate implements ResultHandler {
  }

  @Override
-  public void handleCommandStatus(String status, int updateCount, long insertOID) {
+  public void handleCommandStatus(String status, long updateCount, long insertOID) {
    if (delegate != null) {
      delegate.handleCommandStatus(status, updateCount, insertOID);
    }


@@ -92,6 +92,21 @@ class CompositeParameterList implements V3ParameterList {
    subparams[sub].setBinaryParameter(index - offsets[sub], value, oid);
  }
@Override
public String[] getLiteralValues() {
return new String[]{};
}
@Override
public void saveLiteralValueForClientLogic(int index, String value) throws SQLException {
throw new SQLException("saveLiteralValueForClientLogic is not supported");
}
@Override
public void setClientLogicBytea(int index, byte[] data, int offset, int length, int customOid) throws SQLException {
throw new SQLException("setClientLogicBytea is not supported");
}
  public void setBytea(int index, byte[] data, int offset, int length) throws SQLException {
    int sub = findSubParam(index);
    subparams[sub].setBytea(index - offsets[sub], data, offset, length);


@@ -52,6 +52,11 @@ class CompositeQuery implements Query {
    return sbuf.toString();
  }
@Override
public void replaceNativeSqlForClientLogic(String sql) {
// Empty Implementation for Composite Query
}
  @Override
  public SqlCommand getSqlCommand() {
    return null;


@@ -7,6 +7,8 @@
package org.postgresql.core.v3;

import org.postgresql.PGProperty;
import org.postgresql.clusterchooser.ClusterStatus;
import org.postgresql.clusterchooser.GlobalClusterStatusTracker;
import org.postgresql.core.ConnectionFactory;
import org.postgresql.core.PGStream;
import org.postgresql.core.QueryExecutor;
@@ -18,12 +20,7 @@ import org.postgresql.core.Version;
import org.postgresql.hostchooser.*;
import org.postgresql.jdbc.SslMode;
import org.postgresql.QueryCNListUtils;
-import org.postgresql.util.GT;
-import org.postgresql.util.HostSpec;
-import org.postgresql.util.MD5Digest;
-import org.postgresql.util.PSQLException;
-import org.postgresql.util.PSQLState;
-import org.postgresql.util.ServerErrorMessage;
+import org.postgresql.util.*;
import org.postgresql.log.Logger;
import org.postgresql.log.Log;
@@ -159,20 +156,17 @@ public class ConnectionFactoryImpl extends ConnectionFactory {
  @Override
  public QueryExecutor openConnectionImpl(HostSpec[] hostSpecs, String user, String database,
      Properties info) throws SQLException {
    if (info.getProperty("characterEncoding") != null) {
      if ("UTF8".equals((info.getProperty("characterEncoding")).toUpperCase(Locale.ENGLISH))
          || "GBK".equals((info.getProperty("characterEncoding")).toUpperCase(Locale.ENGLISH))) {
        setClientEncoding(info.getProperty("characterEncoding"));
      }
    }

    if (info.getProperty("use_boolean") != null) {
      setUseBooleang(info.getProperty("use_boolean").toUpperCase(Locale.ENGLISH));
    }

    SslMode sslMode = SslMode.of(info);
@@ -182,170 +176,217 @@ public class ConnectionFactoryImpl extends ConnectionFactory {
      targetServerType = HostRequirement.getTargetServerType(targetServerTypeStr);
    } catch (IllegalArgumentException ex) {
      throw new PSQLException(
          GT.tr("Invalid targetServerType value: {0}", targetServerTypeStr),
          PSQLState.CONNECTION_UNABLE_TO_CONNECT);
    }

    SocketFactory socketFactory = SocketFactoryFactory.getSocketFactory(info);
    Iterator<ClusterSpec> clusterIter = GlobalClusterStatusTracker.getClusterFromHostSpecs(hostSpecs, info);
    Map<HostSpec, HostStatus> knownStates = new HashMap<>();
    Exception exception = new Exception();
    while (clusterIter.hasNext()) {
      ClusterSpec clusterSpec = clusterIter.next();
      HostSpec[] currentHostSpecs = clusterSpec.getHostSpecs();

      HostChooser hostChooser =
          HostChooserFactory.createHostChooser(currentHostSpecs, targetServerType, info);
      Iterator<CandidateHost> hostIter = hostChooser.iterator();
      boolean isMasterCluster = false;
      while (hostIter.hasNext()) {
        CandidateHost candidateHost = hostIter.next();
        HostSpec hostSpec = candidateHost.hostSpec;

        // Note: per-connect-attempt status map is used here instead of GlobalHostStatusTracker
        // for the case when "no good hosts" match (e.g. all the hosts are known as "connectfail")
        // In that case, the system tries to connect to each host in order, thus it should not look into
        // GlobalHostStatusTracker
        HostStatus knownStatus = knownStates.get(hostSpec);
        if (knownStatus != null && !candidateHost.targetServerType.allowConnectingTo(knownStatus)) {
          if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("Known status of host " + hostSpec + " is " + knownStatus + ", and required status was " + candidateHost.targetServerType + ". Will try next host");
          }
          continue;
        }

        //
        // Establish a connection.
        //
        connectInfo = UUID.randomUUID().toString(); // this is used to trace the time taken to establish the connection.
        LOGGER.info("[" + connectInfo + "] " + "Try to connect." + " IP: " + hostSpec.toString());
        PGStream newStream = null;
        try {
          try {
            newStream = tryConnect(user, database, info, socketFactory, hostSpec, sslMode);
          } catch (SQLException e) {
            if (sslMode == SslMode.PREFER
                && PSQLState.INVALID_AUTHORIZATION_SPECIFICATION.getState().equals(e.getSQLState())) {
              // Try non-SSL connection to cover case like "non-ssl only db"
              // Note: PREFER allows loss of encryption, so no significant harm is made
              Throwable ex = null;
              try {
                newStream =
                    tryConnect(user, database, info, socketFactory, hostSpec, SslMode.DISABLE);
                LOGGER.debug("Downgraded to non-encrypted connection for host " + hostSpec);
              } catch (SQLException ee) {
                ex = ee;
              } catch (IOException ee) {
                ex = ee; // Can't use multi-catch in Java 6 :(
              }
              if (ex != null) {
                LOGGER.debug("sslMode==PREFER, however non-SSL connection failed as well", ex);
                // non-SSL failed as well, so re-throw original exception
                throw e;
              }
            } else if (sslMode == SslMode.ALLOW
                && PSQLState.INVALID_AUTHORIZATION_SPECIFICATION.getState().equals(e.getSQLState())) {
              // Try using SSL
              Throwable ex = null;
              try {
                newStream =
                    tryConnect(user, database, info, socketFactory, hostSpec, SslMode.REQUIRE);
                LOGGER.debug("Upgraded to encrypted connection for host " + hostSpec);
              } catch (SQLException ee) {
                ex = ee;
              } catch (IOException ee) {
                ex = ee; // Can't use multi-catch in Java 6 :(
              }
              if (ex != null) {
                LOGGER.debug("sslMode==ALLOW, however SSL connection failed as well", ex);
                // non-SSL failed as well, so re-throw original exception
                throw e;
              }
            } else {
              throw e;
            }
          }

          int cancelSignalTimeout = Integer.parseInt(PGProperty.CANCEL_SIGNAL_TIMEOUT.getDefaultValue());
          if (PGProperty.CANCEL_SIGNAL_TIMEOUT.getInt(info) <= Integer.MAX_VALUE / 1000) {
            cancelSignalTimeout = PGProperty.CANCEL_SIGNAL_TIMEOUT.getInt(info) * 1000;
          } else {
            LOGGER.debug("integer cancelSignalTimeout is too large, it will occur error after multiply by 1000.");
          }
          String socketAddress = newStream.getConnectInfo();
          LOGGER.info("[" + socketAddress + "] " + "Connection is established. ID: " + connectInfo);
          // Do final startup.
          QueryExecutor queryExecutor = new QueryExecutorImpl(newStream, user, database,
              cancelSignalTimeout, info);
          queryExecutor.setProtocolVersion(this.protocolVerion);

          //Check MasterCluster or SecondaryCluster
          if (PGProperty.PRIORITY_SERVERS.get(info) != null) {
            ClusterStatus currentClusterStatus = queryClusterStatus(queryExecutor);
            //report cluster status
            GlobalClusterStatusTracker.reportClusterStatus(clusterSpec, currentClusterStatus);
            if (currentClusterStatus == ClusterStatus.MasterCluster) {
              isMasterCluster = true;
              //report the main cluster currently found
              GlobalClusterStatusTracker.reportMasterCluster(info, clusterSpec);
            } else {
              queryExecutor.close();
              break;
            }
          }

          // Check Master or Secondary
          HostStatus hostStatus = HostStatus.ConnectOK;
          if (candidateHost.targetServerType != HostRequirement.any) {
            hostStatus = isMaster(queryExecutor) ? HostStatus.Master : HostStatus.Secondary;
            LOGGER.info("Known status of host " + hostSpec + " is " + hostStatus);
          }
          GlobalHostStatusTracker.reportHostStatus(hostSpec, hostStatus, info);
          knownStates.put(hostSpec, hostStatus);
          if (!candidateHost.targetServerType.allowConnectingTo(hostStatus)) {
            queryExecutor.close();
            continue;
          }

          //query and update statements cause logical replication to fail, temporarily evade
          if (info.getProperty("replication") == null) {
            runInitialQueries(queryExecutor, info);
            queryExecutor.setGaussdbVersion(queryGaussdbVersion(queryExecutor));
          }
          if (MultiHostChooser.isUsingAutoLoadBalance(info)) {
            QueryCNListUtils.runRereshCNListQueryies(queryExecutor, info);
          }

          LOGGER.info("Connect complete. ID: " + connectInfo);
          // And we're done.
          return queryExecutor;
        } catch (ConnectException cex) {
          // Added by Peter Mount <peter@retep.org.uk>
          // ConnectException is thrown when the connection cannot be made.
          // we trap this an return a more meaningful message for the end user
          GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail, info);
          knownStates.put(hostSpec, HostStatus.ConnectFail);
          if (hostIter.hasNext() || clusterIter.hasNext()) {
            LOGGER.info("ConnectException occured while connecting to {0}" + hostSpec, cex);
            exception.addSuppressed(cex);
            // still more addresses to try
            continue;
          }
          if (exception.getSuppressed().length > 0) {
            cex.addSuppressed(exception);
          }
          throw new PSQLException(GT.tr(
              "Connection to {0} refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.",
              hostSpec), PSQLState.CONNECTION_UNABLE_TO_CONNECT, cex);
        } catch (IOException ioe) {
          closeStream(newStream);
          GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail, info);
          knownStates.put(hostSpec, HostStatus.ConnectFail);
          if (hostIter.hasNext() || clusterIter.hasNext()) {
            LOGGER.info("IOException occured while connecting to " + hostSpec, ioe);
            exception.addSuppressed(ioe);
            // still more addresses to try
            continue;
          }
          if (exception.getSuppressed().length > 0) {
            ioe.addSuppressed(exception);
          }
          throw new PSQLException(GT.tr("The connection attempt failed."),
              PSQLState.CONNECTION_UNABLE_TO_CONNECT, ioe);
        } catch (SQLException se) {
          closeStream(newStream);
          GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail, info);
          knownStates.put(hostSpec, HostStatus.ConnectFail);
          if (hostIter.hasNext() || clusterIter.hasNext()) {
            LOGGER.info("SQLException occured while connecting to " + hostSpec, se);
            exception.addSuppressed(se);
            // still more addresses to try
            continue;
          }
          if (exception.getSuppressed().length > 0) {
            se.addSuppressed(exception);
          }
          throw se;
        }
      }
      //When the cluster is a production cluster and there is no exception, still unable to return connection, throw an exception
      if (isMasterCluster) {
        LOGGER.info("Could not find a server with specified targetServerType: " + targetServerType + ". The current server known status is: " + knownStates.entrySet().toString());
        throw new PSQLException(GT
            .tr("Could not find a server with specified targetServerType: {0}", targetServerType),
            PSQLState.CONNECTION_UNABLE_TO_CONNECT);
      } else {
        //update clusterStatus
        GlobalClusterStatusTracker.reportClusterStatus(clusterSpec, ClusterStatus.ConnectFail);
      }
    }

    if (PGProperty.PRIORITY_SERVERS.get(info) != null) {
      LOGGER.info("Could not find production cluster");
      throw new PSQLException(GT.tr("Could not find production cluster"), PSQLState.CONNECTION_UNABLE_TO_CONNECT);
    } else {
      LOGGER.info("Could not find a server with specified targetServerType: " + targetServerType + ". The current server known status is: " + knownStates.entrySet().toString());
      throw new PSQLException(GT
          .tr("Could not find a server with specified targetServerType: {0}", targetServerType),
          PSQLState.CONNECTION_UNABLE_TO_CONNECT);
    }
  }
  private List<String[]> getParametersForStartup(String user, String database, Properties info) {
@@ -379,6 +420,10 @@ public class ConnectionFactoryImpl extends ConnectionFactory {
    if (currentSchema != null) {
      paramList.add(new String[]{"search_path", currentSchema});
    }
if (PGProperty.PG_CLIENT_LOGIC.get(info) != null && PGProperty.PG_CLIENT_LOGIC.get(info).equals("1")) {
paramList.add(new String[]{"enable_full_encryption", "1"});
}
    return paramList;
  }
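As in `getParametersForStartup` above, startup parameters are appended conditionally from connection properties; when client logic is enabled the driver additionally asks the server for `enable_full_encryption`. A hedged sketch of that pattern (the `clientLogic` property key and the `StartupParamsSketch` class are illustrative stand-ins, not the driver's actual `PGProperty` names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Sketch of conditional startup-parameter assembly.
public class StartupParamsSketch {
    static List<String[]> buildParams(Properties info) {
        List<String[]> paramList = new ArrayList<>();
        String currentSchema = info.getProperty("currentSchema");
        if (currentSchema != null) {
            paramList.add(new String[]{"search_path", currentSchema});
        }
        // Client logic (column encryption) asks the server to enable full encryption.
        if ("1".equals(info.getProperty("clientLogic"))) {
            paramList.add(new String[]{"enable_full_encryption", "1"});
        }
        return paramList;
    }
}
```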
@@ -804,4 +849,14 @@ public class ConnectionFactoryImpl extends ConnectionFactory {
      return "";
    }
  }
private ClusterStatus queryClusterStatus(QueryExecutor queryExecutor) throws SQLException, IOException {
byte[][] result = SetupQueryRunner.run(queryExecutor, "select barrier_id from gs_get_local_barrier_status();", true);
String barrierId = queryExecutor.getEncoding().decode(result[0]);
if(barrierId != null && !barrierId.equals("")){
return ClusterStatus.SecondaryCluster;
}else{
return ClusterStatus.MasterCluster;
}
}
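`queryClusterStatus` classifies a cluster by the barrier id reported by `gs_get_local_barrier_status()`: a non-empty barrier id identifies a secondary (disaster-recovery) cluster, an empty one the production (master) cluster. A standalone sketch of that decision (the nested `ClusterStatus` enum is a local stand-in for `org.postgresql.clusterchooser.ClusterStatus`; the server round trip is omitted):

```java
// Sketch of the barrier-id based cluster classification.
public class ClusterStatusSketch {
    enum ClusterStatus { MasterCluster, SecondaryCluster }

    static ClusterStatus fromBarrierId(String barrierId) {
        // Only a secondary (DR) cluster reports a non-empty barrier id.
        if (barrierId != null && !barrierId.isEmpty()) {
            return ClusterStatus.SecondaryCluster;
        }
        return ClusterStatus.MasterCluster;
    }
}
```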
}


@@ -43,6 +43,7 @@ class SimpleParameterList implements V3ParameterList {
  private static final byte BINARY = 4;

  SimpleParameterList(int paramCount, TypeTransferModeRegistry transferModeRegistry) {
this.paramLiteralValues = new String[paramCount];
    this.paramValues = new Object[paramCount];
    this.paramTypes = new int[paramCount];
    this.encoded = new byte[paramCount][];
@@ -125,16 +126,41 @@ class SimpleParameterList implements V3ParameterList {
    public void setLiteralParameter(int index, String value, int oid) throws SQLException {
        bind(index, value, oid, TEXT);
        saveLiteralValueForClientLogic(index, value);
    }

    public void setStringParameter(int index, String value, int oid) throws SQLException {
        bind(index, value, oid, TEXT);
        saveLiteralValueForClientLogic(index, value);
    }

    public void setBinaryParameter(int index, byte[] value, int oid) throws SQLException {
        bind(index, value, oid, BINARY);
    }
@Override
public void saveLiteralValueForClientLogic(int index, String value) throws SQLException {
if (index < 1 || index > paramValues.length) {
throw new PSQLException(
GT.tr("The column index is out of range: {0}, number of columns: {1}.",
index, paramValues.length),
PSQLState.INVALID_PARAMETER_VALUE);
}
--index;
this.paramLiteralValues[index] = value;
}
@Override
public String[] getLiteralValues() {
return this.paramLiteralValues;
}
@Override
public void setClientLogicBytea(int index, byte[] data, int offset, int length, int customOid) throws SQLException {
bind(index, new StreamWrapper(data, offset, length), customOid, BINARY);
}
    @Override
    public void setBytea(int index, byte[] data, int offset, int length) throws SQLException {
        bind(index, new StreamWrapper(data, offset, length), Oid.BYTEA, BINARY);
@@ -408,6 +434,7 @@ class SimpleParameterList implements V3ParameterList {
    public ParameterList copy() {
        SimpleParameterList newCopy = new SimpleParameterList(paramValues.length, transferModeRegistry);
        System.arraycopy(paramLiteralValues, 0, newCopy.paramLiteralValues, 0, paramLiteralValues.length);
        System.arraycopy(paramValues, 0, newCopy.paramValues, 0, paramValues.length);
        System.arraycopy(paramTypes, 0, newCopy.paramTypes, 0, paramTypes.length);
        System.arraycopy(flags, 0, newCopy.flags, 0, flags.length);
@@ -481,6 +508,7 @@ class SimpleParameterList implements V3ParameterList {
        return ts.toString();
    }

    private final String[] paramLiteralValues;
    private final Object[] paramValues;
    private final int[] paramTypes;
    private final byte[] flags;


@@ -112,6 +112,11 @@ class SimpleQuery implements Query {
        return nativeQuery.nativeSql;
    }

    @Override
    public void replaceNativeSqlForClientLogic(String sql) {
        nativeQuery.nativeSql = sql;
    }

    void setStatementName(String statementName, short deallocateEpoch) {
        assert statementName != null : "statement name should not be null";
        this.statementName = statementName;


@@ -694,6 +694,22 @@ public abstract class BaseDataSource implements CommonDataSource, Referenceable
        return PGProperty.TARGET_SERVER_TYPE.get(properties);
    }

    /**
     * @param usingEip using eip
     * @see PGProperty#USING_EIP
     */
    public void setUsingEip(String usingEip) {
        PGProperty.USING_EIP.set(properties, usingEip);
    }

    /**
     * @return using eip
     * @see PGProperty#USING_EIP
     */
    public String getUsingEip() {
        return PGProperty.USING_EIP.get(properties);
    }

    /**
     * @param loadBalanceHosts load balance hosts
     * @see PGProperty#LOAD_BALANCE_HOSTS


@@ -7,49 +7,70 @@ package org.postgresql.hostchooser;

import static java.lang.System.currentTimeMillis;

import org.postgresql.GlobalConnectionTracker;
import org.postgresql.PGProperty;
import org.postgresql.util.HostSpec;

import java.util.*;

/**
 * Keeps track of HostSpec targets in a global map.
 */
public class GlobalHostStatusTracker {
    private static final Map<HostSpec, HostSpecStatus> hostStatusMap =
            new HashMap<HostSpec, HostSpecStatus>();

    /**
     * Store the actual observed host status.
     *
     * @param hostSpec The host whose status is known.
     * @param hostStatus Latest known status for the host.
     * @param prop Extra properties controlling the connection.
     */
    public static void reportHostStatus(HostSpec hostSpec, HostStatus hostStatus, Properties prop) {
        HostStatus originalHostStatus;
        long now = currentTimeMillis();
        synchronized (hostStatusMap) {
            HostSpecStatus hostSpecStatus = hostStatusMap.get(hostSpec);
            if (hostSpecStatus == null) {
                hostSpecStatus = new HostSpecStatus(hostSpec);
                hostStatusMap.put(hostSpec, hostSpecStatus);
                originalHostStatus = hostStatus;
            } else {
                originalHostStatus = hostSpecStatus.status;
            }
            hostSpecStatus.status = hostStatus;
            hostSpecStatus.lastUpdated = now;
            observationState(prop, originalHostStatus, hostStatus, hostSpec);
        }
    }

    /**
     * Observes whether the host status changed from standby to master.
     *
     * @param prop Extra properties controlling the connection.
     * @param originalHostStatus The original state of the current host.
     * @param currentHostStatus The current status of the current host.
     * @param hostSpec The host whose status is known.
     */
    public static void observationState(Properties prop, HostStatus originalHostStatus, HostStatus currentHostStatus, HostSpec hostSpec) {
        if (originalHostStatus == HostStatus.Secondary && currentHostStatus == HostStatus.Master) {
            GlobalConnectionTracker.closeOldConnection(hostSpec.toString(), prop);
        }
    }

    /**
     * Returns a list of candidate hosts that have the required targetServerType.
     *
     * @param hostSpecs The potential list of hosts.
     * @param targetServerType The required target server type.
     * @param hostRecheckMillis How stale information is allowed.
     * @return candidate hosts to connect to.
     */
    static List<HostSpec> getCandidateHosts(HostSpec[] hostSpecs,
            HostRequirement targetServerType, long hostRecheckMillis) {
        List<HostSpec> candidates = new ArrayList<HostSpec>(hostSpecs.length);
        long latestAllowedUpdate = currentTimeMillis() - hostRecheckMillis;
        synchronized (hostStatusMap) {
@@ -57,8 +78,8 @@ public class GlobalHostStatusTracker {
            HostSpecStatus hostInfo = hostStatusMap.get(hostSpec);
            // candidates are nodes we do not know about and the nodes with correct type
            if (hostInfo == null
                    || hostInfo.lastUpdated < latestAllowedUpdate
                    || targetServerType.allowConnectingTo(hostInfo.status)) {
                candidates.add(hostSpec);
            }
        }

@ -146,8 +146,16 @@ public class MultiHostChooser implements HostChooser {
*/ */
private List<HostSpec> priorityRoundRobin(List<HostSpec> hostSpecs) { private List<HostSpec> priorityRoundRobin(List<HostSpec> hostSpecs) {
// Obtains the URL CN Host List. // Obtains the URL CN Host List.
List<HostSpec> urlHostSpecs = Arrays.asList(Driver.getURLHostSpecs(info)); List<HostSpec> urlHostSpecs;
int priorityCNNumber = Integer.parseInt(info.getProperty("autoBalance").substring("priority".length())); int priorityCNNumber = Integer.parseInt(info.getProperty("autoBalance").substring("priority".length()));
if(PGProperty.PRIORITY_SERVERS.get(info) != null){
urlHostSpecs = getUrlHostSpecs(hostSpecs);
if(priorityCNNumber > urlHostSpecs.size()){
priorityCNNumber = urlHostSpecs.size();
}
}else{
urlHostSpecs = Arrays.asList(Driver.getURLHostSpecs(info));
}
// Obtain the currently active CN node that is in the priority state. // Obtain the currently active CN node that is in the priority state.
List<HostSpec> priorityURLHostSpecs = getSurvivalPriorityURLHostSpecs(hostSpecs, urlHostSpecs, priorityCNNumber); List<HostSpec> priorityURLHostSpecs = getSurvivalPriorityURLHostSpecs(hostSpecs, urlHostSpecs, priorityCNNumber);
List<HostSpec> nonPriorityHostSpecs = getNonPriorityHostSpecs(hostSpecs, priorityURLHostSpecs); List<HostSpec> nonPriorityHostSpecs = getNonPriorityHostSpecs(hostSpecs, priorityURLHostSpecs);
@ -161,6 +169,20 @@ public class MultiHostChooser implements HostChooser {
} }
} }
// Get the URL string of the current cluster when using disaster recovery switching
private List<HostSpec> getUrlHostSpecs(List<HostSpec> hostSpecs){
HostSpec[] urlHostSpecs = Driver.getURLHostSpecs(info);
Integer index = Integer.valueOf(PGProperty.PRIORITY_SERVERS.get(info));
HostSpec[] imaginaryMasterHostSpec = Arrays.copyOfRange(urlHostSpecs, 0, index);
HostSpec[] imaginarySlaveHostSpec = Arrays.copyOfRange(urlHostSpecs, index, urlHostSpecs.length);
HostSpec[] currentHostSpecs = hostSpecs.toArray(new HostSpec[0]);
if(Arrays.toString(currentHostSpecs).contains(imaginaryMasterHostSpec[0].toString())){
return Arrays.asList(imaginaryMasterHostSpec);
}else{
return Arrays.asList(imaginarySlaveHostSpec);
}
}
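The getUrlHostSpecs() change above splits the URL host list at the priorityServers index into a primary group and a disaster-recovery group, then returns whichever group the currently visible hosts fall into. A simplified, hypothetical sketch of that split (plain strings instead of HostSpec, and a List.contains check standing in for the Arrays.toString matching):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the priorityServers split: the first
// `priorityServers` entries of the URL host list are treated as the
// primary (master) cluster and the rest as the disaster-recovery
// cluster; the returned group depends on where the currently usable
// hosts are found.
public class PrioritySplitSketch {
    static List<String> pickCluster(String[] urlHosts, int priorityServers,
                                    List<String> currentHosts) {
        String[] master = Arrays.copyOfRange(urlHosts, 0, priorityServers);
        String[] recovery = Arrays.copyOfRange(urlHosts, priorityServers, urlHosts.length);
        // If the first master host is among the hosts we are currently
        // connecting to, stay on the master cluster; otherwise fail over.
        return currentHosts.contains(master[0])
                ? Arrays.asList(master)
                : Arrays.asList(recovery);
    }

    public static void main(String[] args) {
        String[] url = {"cn1:5432", "cn2:5432", "dr1:5432"};
        System.out.println(pickCluster(url, 2, Arrays.asList("cn1:5432")));
        // → [cn1:5432, cn2:5432]
    }
}
```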
// Returns the alive PriorityURL. // Returns the alive PriorityURL.
private List<HostSpec> getSurvivalPriorityURLHostSpecs(List<HostSpec> hostSpecs, List<HostSpec> urlHostSpecs, int priorityCNNumber) { private List<HostSpec> getSurvivalPriorityURLHostSpecs(List<HostSpec> hostSpecs, List<HostSpec> urlHostSpecs, int priorityCNNumber) {
List<HostSpec> priorityURLHostSpecs = new ArrayList<>(); List<HostSpec> priorityURLHostSpecs = new ArrayList<>();


@@ -28,31 +28,33 @@ import java.util.List;
 * Internal class, it is not a part of public API.
 */
public class BatchResultHandler extends ResultHandlerBase {
    private final PgStatement pgStatement;
    private int resultIndex = 0;

    private final Query[] queries;
    private final long[] longUpdateCounts;
    private final ParameterList[] parameterLists;
    private final boolean expectGeneratedKeys;
    private PgResultSet generatedKeys;
    private int committedRows; // 0 means no rows committed. 1 means row 0 was committed, and so on
    private final List<List<byte[][]>> allGeneratedRows;
    private List<byte[][]> latestGeneratedRows;
    private PgResultSet latestGeneratedKeysRs;

    BatchResultHandler(PgStatement pgStatement, Query[] queries, ParameterList[] parameterLists,
            boolean expectGeneratedKeys) {
        this.pgStatement = pgStatement;
        this.queries = queries;
        this.parameterLists = parameterLists;
        this.longUpdateCounts = new long[queries.length];
        this.expectGeneratedKeys = expectGeneratedKeys;
        this.allGeneratedRows = !expectGeneratedKeys ? null : new ArrayList<List<byte[][]>>();
    }

    @Override
    public void handleResultRows(Query fromQuery, Field[] fields, List<byte[][]> tuples,
            ResultCursor cursor) {
        // If SELECT, then handleCommandStatus call would just be missing
        resultIndex++;
        if (!expectGeneratedKeys) {
@@ -63,8 +65,7 @@ public class BatchResultHandler extends ResultHandlerBase {
            try {
                // If SELECT, the resulting ResultSet is not valid
                // Thus it is up to handleCommandStatus to decide if resultSet is good enough
                latestGeneratedKeysRs = (PgResultSet) pgStatement.createResultSet(fromQuery, fields,
                        new ArrayList<byte[][]>(), cursor);
            } catch (SQLException e) {
                handleError(e);
@@ -73,7 +74,8 @@ public class BatchResultHandler extends ResultHandlerBase {
        latestGeneratedRows = tuples;
    }

    @Override
    public void handleCommandStatus(String status, long updateCount, long insertOID) {
        if (latestGeneratedRows != null) {
            // We have DML. Decrease resultIndex that was just increased in handleResultRows
            resultIndex--;
@@ -90,12 +92,12 @@ public class BatchResultHandler extends ResultHandlerBase {
        if (resultIndex >= queries.length) {
            handleError(new PSQLException(GT.tr("Too many update results were returned."),
                    PSQLState.TOO_MANY_RESULTS));
            return;
        }
        latestGeneratedKeysRs = null;

        longUpdateCounts[resultIndex++] = updateCount;
    }

    private boolean isAutoCommit() {
@@ -125,6 +127,7 @@ public class BatchResultHandler extends ResultHandlerBase {
        allGeneratedRows.clear();
    }

    @Override
    public void handleWarning(SQLWarning warning) {
        pgStatement.addWarning(warning);
    }
@@ -132,7 +135,7 @@ public class BatchResultHandler extends ResultHandlerBase {
    @Override
    public void handleError(SQLException newError) {
        if (getException() == null) {
            Arrays.fill(longUpdateCounts, committedRows, longUpdateCounts.length, Statement.EXECUTE_FAILED);
            if (allGeneratedRows != null) {
                allGeneratedRows.clear();
            }
@@ -142,11 +145,12 @@ public class BatchResultHandler extends ResultHandlerBase {
                queryString = queries[resultIndex].toString(parameterLists[resultIndex]);
            }

            BatchUpdateException batchException;
            batchException = new BatchUpdateException(
                    GT.tr("Batch entry {0} {1} was aborted: {2} Call getNextException to see other errors in the batch.",
                            resultIndex, queryString, newError.getMessage()),
                    newError.getSQLState(), 0, uncompressLongUpdateCount(), newError);
            super.handleError(batchException);
        }
        resultIndex++;
@@ -154,18 +158,21 @@ public class BatchResultHandler extends ResultHandlerBase {
        super.handleError(newError);
    }

    @Override
    public void handleCompletion() throws SQLException {
        updateGeneratedKeys();
        SQLException batchException = getException();
        if (batchException != null) {
            if (isAutoCommit()) {
                // Re-create batch exception since rows after exception might indeed succeed.
                BatchUpdateException newException;
                newException = new BatchUpdateException(
                        batchException.getMessage(),
                        batchException.getSQLState(), 0,
                        uncompressLongUpdateCount(),
                        batchException.getCause()
                );
                SQLException next = batchException.getNextException();
                if (next != null) {
                    newException.setNextException(next);
@@ -181,8 +188,21 @@ public class BatchResultHandler extends ResultHandlerBase {
    }

    private int[] uncompressUpdateCount() {
        long[] original = uncompressLongUpdateCount();
        int[] copy = new int[original.length];
        for (int i = 0; i < original.length; i++) {
            copy[i] = original[i] > Integer.MAX_VALUE ? Statement.SUCCESS_NO_INFO : (int) original[i];
        }
        return copy;
    }

    public int[] getUpdateCount() {
        return uncompressUpdateCount();
    }

    private long[] uncompressLongUpdateCount() {
        if (!(queries[0] instanceof BatchedQuery)) {
            return longUpdateCounts;
        }
        int totalRows = 0;
        boolean hasRewrites = false;
@@ -192,7 +212,7 @@ public class BatchResultHandler extends ResultHandlerBase {
            hasRewrites |= batchSize > 1;
        }
        if (!hasRewrites) {
            return longUpdateCounts;
        }

        /* In this situation there is a batch that has been rewritten. Substitute
@@ -200,12 +220,12 @@ public class BatchResultHandler extends ResultHandlerBase {
         * indicate successful completion for each row the driver client added
         * to the batch.
         */
        long[] newUpdateCounts = new long[totalRows];
        int offset = 0;
        for (int i = 0; i < queries.length; i++) {
            Query query = queries[i];
            int batchSize = query.getBatchSize();
            long superBatchResult = longUpdateCounts[i];
            if (batchSize == 1) {
                newUpdateCounts[offset++] = superBatchResult;
                continue;
@@ -221,7 +241,7 @@ public class BatchResultHandler extends ResultHandlerBase {
        return newUpdateCounts;
    }

    public long[] getLargeUpdateCount() {
        return uncompressLongUpdateCount();
    }
}
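The diff above moves BatchResultHandler to long update counts (the JDBC 4.2 large-update API) and clamps them back to int for the legacy getUpdateCount() path. A self-contained sketch of the clamping rule, mirroring uncompressUpdateCount():

```java
import java.sql.Statement;
import java.util.Arrays;

// Sketch of the long→int clamping introduced in uncompressUpdateCount():
// JDBC 4.2 reports row counts as long, but the legacy int[] API must fall
// back to Statement.SUCCESS_NO_INFO (-2) whenever a count no longer fits
// in an int. The class name here is illustrative, not part of the driver.
public class UpdateCountClamp {
    static int[] clamp(long[] original) {
        int[] copy = new int[original.length];
        for (int i = 0; i < original.length; i++) {
            copy[i] = original[i] > Integer.MAX_VALUE
                    ? Statement.SUCCESS_NO_INFO
                    : (int) original[i];
        }
        return copy;
    }

    public static void main(String[] args) {
        long[] counts = {1L, 5_000_000_000L, 0L};
        // → [1, -2, 0]
        System.out.println(Arrays.toString(clamp(counts)));
    }
}
```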


@@ -0,0 +1,337 @@
package org.postgresql.jdbc;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;
import org.postgresql.PGProperty;
import org.postgresql.core.NativeQuery;
import org.postgresql.core.Parser;
import org.postgresql.log.Log;
import org.postgresql.log.Logger;
/**
 * Captures the client logic functionality to be exposed in the JDBC driver.
 * It uses the JNI interface as a back-end via the ClientLogicImpl class.
 */
public class ClientLogic {
static final int ERROR_BAD_RESPONSE_NULL = 1001; /* using large numbers to avoid collision with libpq error code numbers */
static final int ERROR_BAD_RESPONSE_1 = 1002;
static final int ERROR_BAD_RESPONSE_2 = 1003;
static final int ERROR_BAD_RESPONSE_3 = 1004;
    static final String ERROR_TEXT_BAD_RESPONSE = "Bad response from backend component";
static final int ERROR_CONNECTION_IS_OPEN = 1005;
static final String ERROR_TEXT_CONNECTION_IS_OPEN = "client logic connection is already opened";
static final int ERROR_INVALID_HANDLE = 1010;
static final String ERROR_TEXT_INVALID_HANDLE = "Invalid handle";
static final int ERROR_EMPTY_DATA = 1012;
static final String ERROR_TEXT_EMPTY_DATA = "Empty data";
static final int ERROR_PARSER_FAILURE = 1013;
static final String ERROR_TEXT_PARSER_FAILURE = "Failed parsing the input query";
    private final AtomicInteger stamentNameCounter = new AtomicInteger(0);
    private static final Log LOGGER = Logger.getLogger(ClientLogic.class.getName());
/**
* CLientLogicStatus class is responsible for parsing the result from the gauss_cl_jni library
*/
    private static class CLientLogicStatus {
private int errorCode = 0;
private String errorText = "";
public CLientLogicStatus(Object[] JNIResult) {
if (JNIResult == null) {
errorCode = ERROR_BAD_RESPONSE_NULL;
errorText = ERROR_TEXT_BAD_RESPONSE;
return;
}
if (JNIResult.length < 1) {
errorCode = ERROR_BAD_RESPONSE_1;
errorText = ERROR_TEXT_BAD_RESPONSE;
return;
}
if (JNIResult[0] == null) {
errorCode = ERROR_BAD_RESPONSE_3;
errorText = ERROR_TEXT_BAD_RESPONSE;
return;
}
Object[] statusData = (Object[]) JNIResult[0];
if (statusData.length < 2) {
errorCode = ERROR_BAD_RESPONSE_2;
errorText = ERROR_TEXT_BAD_RESPONSE;
return;
}
if (statusData[0] != null) {
errorText = (String)statusData[1];
if (statusData[0].getClass().isArray()) {
int[] errorCodeArr = (int[])statusData[0];
if (errorCodeArr.length > 0) {
errorCode = (int)errorCodeArr[0];
}
}
}
if (statusData[1] != null) {
errorText = (String)statusData[1];
}
}
        /**
         * @return true if the current request succeeded, false if it failed
         */
public boolean isOK() {
return errorCode == 0;
}
@Override
public String toString() {
return "CLientLogicStatus [error code=" + errorCode + ", error text=" + errorText + "]";
}
public int getErrorCode() {
return errorCode;
}
public String getErrorText() {
return errorText;
}
}
private ClientLogicImpl impl = null;
public ClientLogic() {
}
/**
* Closes the connection to the database
*/
public void close() {
if (impl != null) {
impl.close();
impl = null;
}
}
/**
     * If finalize is called, make sure the connection is closed.
*/
protected void finalize() {
close();
}
/**
* Link the client logic JNI & C side to the PgConnection instance
* @param databaseName database name
* @param jdbcConn the JDBC connection object to be used to select cache data with
* @throws ClientLogicException
*/
public void linkClientLogic(String databaseName, PgConnection jdbcConn) throws ClientLogicException {
impl = new ClientLogicImpl();
Object[] result = impl.linkClientLogic(databaseName, jdbcConn);
CLientLogicStatus status = new CLientLogicStatus(result);
if (!status.isOK()) {
throw new ClientLogicException(status.getErrorCode(), status.getErrorText());
}
        long handle = 0;
        if (result.length > 1) {
            if (result[1] != null && result[1].getClass().isArray()) {
                long[] handleArr = (long[]) result[1];
                if (handleArr.length > 0) {
                    handle = handleArr[0];
                }
            }
        }
        if (handle <= 0) {
            throw new ClientLogicException(ERROR_INVALID_HANDLE, ERROR_TEXT_INVALID_HANDLE);
        }
impl.setHandle(handle);
}
/**
* Runs the pre-process function of the client logic
* to change the client logic fields from the user input to the client logic format
* @param originalQuery query to modify with user input
* @return the modified query with client logic fields changed
* @throws ClientLogicException
*/
public String runQueryPreProcess(String originalQuery) throws ClientLogicException {
Object[] result = impl.runQueryPreProcess(originalQuery);
CLientLogicStatus status = new CLientLogicStatus(result);
if (!status.isOK()) {
            String errorText = status.getErrorText();
            // Remove trailing \n as it fails the tests
            if (errorText.length() > 1 && errorText.charAt(errorText.length() - 1) == '\n') {
                errorText = errorText.substring(0, errorText.length() - 1);
            }
            throw new ClientLogicException(status.getErrorCode(), errorText);
        }
String resultQuery = "";
if (result.length > 1) {
if (result[1] != null) {
resultQuery = (String) result[1];
}
}
        if (resultQuery.length() == 0) {
            resultQuery = originalQuery; // Empty query received, just ignore
        }
return resultQuery;
}
/**
* Runs the post query function - executed after the query has completed
* @throws ClientLogicException
*/
public void runQueryPostProcess() throws ClientLogicException {
Object[] result = impl.runQueryPostProcess();
CLientLogicStatus status = new CLientLogicStatus(result);
if (!status.isOK()) {
throw new ClientLogicException(status.getErrorCode(), status.getErrorText());
}
}
/**
* Runs client logic on fields to get back the user format
* @param data2Process the client logic data
* @param dataType the OID of the user data
* @return the data in user format
* @throws ClientLogicException
*/
public String runClientLogic(String data2Process, int dataType) throws ClientLogicException {
Object[] result = impl.runClientLogic(data2Process, dataType);
CLientLogicStatus status = new CLientLogicStatus(result);
if (!status.isOK()) {
throw new ClientLogicException(status.getErrorCode(), status.getErrorText());
}
String resultData = "";
if (result.length > 1) {
if (result[1] != null) {
resultData = (String) result[1];
}
}
if (resultData.length() == 0) {
throw new ClientLogicException(ERROR_EMPTY_DATA, ERROR_TEXT_EMPTY_DATA);
}
return resultData;
}
/**
* @return true if the client logic instance is active (connected to the database) and false if it is not
*/
    public boolean isActive() {
        return impl != null && impl.getHandle() > 0;
    }
/**
* Check if oid of returned type is a client logic field
* @param dataType oid of data type
* @return true for client logic and false if it is not
*/
public static boolean isClientLogicField(int dataType) {
return (dataType == 4402 || dataType == 4403);
}
    /**
     * Prepares a query.
     * @param query the query SQL
     * @param statementName the statement name
     * @return the modified query
     * @throws ClientLogicException
     */
    public String prepareQuery(String query, String statementName) throws ClientLogicException {
        // Replace the JDBC syntax with ? for bind parameters by $1 $2 ... $n
        List<NativeQuery> queries;
        try {
            queries = Parser.parseJdbcSql(query, true, true, true, true);
        } catch (SQLException e) {
            throw new ClientLogicException(ERROR_PARSER_FAILURE, ERROR_TEXT_PARSER_FAILURE, true);
        }
        String queryNative;
        int parameterCount;
        if (queries.size() > 0 && queries.get(0) != null) {
            queryNative = queries.get(0).nativeSql;
            parameterCount = queries.get(0).bindPositions.length;
        } else {
            throw new ClientLogicException(ERROR_PARSER_FAILURE, ERROR_TEXT_PARSER_FAILURE, true);
        }
        /* Run the actual query */
        Object[] result = impl.prepareQuery(queryNative, statementName, parameterCount);
        CLientLogicStatus status = new CLientLogicStatus(result);
        if (!status.isOK()) {
            throw new ClientLogicException(status.getErrorCode(), status.getErrorText());
        }
        String modifiedQuery = "";
        if (result.length > 1) {
            if (result[1] != null) {
                modifiedQuery = (String) result[1];
            }
        }
        if (modifiedQuery.length() == 0) {
            throw new ClientLogicException(ERROR_EMPTY_DATA, ERROR_TEXT_EMPTY_DATA, true);
        }
        return modifiedQuery;
    }
/**
* Replaces client logic parameter values in a prepared statement
* @param statementName the name of the statement - the one used in the prepareQuery method
* @param paramValues the list of current values
* @return list of modified parameters
* @throws ClientLogicException
*/
public List<String> replaceStatementParams(String statementName, List<String> paramValues) throws ClientLogicException {
        String[] arrParams = paramValues.toArray(new String[0]);
Object[] resultImpl = impl.replaceStatementParams(statementName, arrParams);
CLientLogicStatus status = new CLientLogicStatus(resultImpl);
if (!status.isOK()) {
throw new ClientLogicException(status.getErrorCode(), status.getErrorText());
}
if (resultImpl[1] == null || !resultImpl[1].getClass().isArray()) {
throw new ClientLogicException(ERROR_EMPTY_DATA, ERROR_TEXT_EMPTY_DATA);
}
Object[] resultsImplArr = (Object[])resultImpl[1];
List<String> result = new ArrayList<>();
for (int i = 0; i < resultsImplArr.length; ++i) {
Object value = resultsImplArr[i];
if (value != null && value.getClass().equals(String.class)) {
result.add((String)value);
} else {
result.add(null);
}
}
return result;
}
/**
     * @return a unique statement name that can be used for a prepared statement.
     */
    public String getStatementName() {
        int statementCounter = stamentNameCounter.incrementAndGet();
        return "statement_" + statementCounter;
    }
/**
* Applies client logic to extract client logic data inside error messages into user input
* @param originalMessage the original message with client logic data
     * @return the message with client logic data converted back to user input
*/
public String clientLogicMessage(String originalMessage) {
String newMessage = originalMessage;
Object[] result = impl.replaceErrorMessage(originalMessage);
CLientLogicStatus status = new CLientLogicStatus(result);
if (!status.isOK()) {
return newMessage; // equal to the original message in this case
}
if (result.length > 1) {
if (result[1] != null) {
newMessage = (String) result[1];
if (newMessage.length() == 0) { // If it is empty do not use it
newMessage = originalMessage;
}
}
}
return newMessage;
}
} // End of class
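CLientLogicStatus assumes a fixed convention for the Object[] returned by the JNI layer: element 0 holds an {errorCode, errorText} pair and element 1 (when present) holds the payload. A hypothetical, stand-alone illustration of the payload-or-fallback handling that ClientLogic applies throughout:

```java
// Hypothetical sketch of the JNI result convention used by ClientLogic:
// result[0] carries status, result[1] (optional) carries the payload.
// When the payload is missing, the caller falls back to the original
// input, as runQueryPreProcess() and clientLogicMessage() do.
public class JniStatusSketch {
    static String payloadOrFallback(Object[] jniResult, String fallback) {
        if (jniResult == null || jniResult.length < 2 || jniResult[1] == null) {
            return fallback; // no payload delivered: keep the original value
        }
        return (String) jniResult[1];
    }

    public static void main(String[] args) {
        Object[] ok = { new Object[]{ new int[]{0}, "" }, "rewritten query" };
        System.out.println(payloadOrFallback(ok, "original"));   // rewritten query
        System.out.println(payloadOrFallback(null, "original")); // original
    }
}
```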


@@ -0,0 +1,65 @@
package org.postgresql.jdbc;
import java.sql.SQLException;
public class ClientLogicException extends SQLException {
/**
* The serial version UID for jvm serialization
*/
private static final long serialVersionUID = 1148581951725038527L;
private int errorCode;
private String errorText;
/*
* Indicates whether the error is a parsing error.
* Parsing errors must not block bad queries from being sent to the server:
* PgConnection.isValid relies on a query that does not parse correctly.
*/
private boolean parsingError = false;
/**
* @param errorCode - JNI lib error code
* @param errorText - JNI lib error message
*/
public ClientLogicException(int errorCode, String errorText) {
super(errorText);
this.errorCode = errorCode;
this.errorText = errorText;
}
/**
* Constructor that also sets the parsing error flag
* @param errorCode - JNI lib error code
* @param errorText - JNI lib error message
* @param parsingError - whether the current issue is a parsing issue
*/
public ClientLogicException(int errorCode, String errorText, boolean parsingError) {
super(errorText);
this.errorCode = errorCode;
this.errorText = errorText;
this.parsingError = parsingError;
}
/**
* @return error code
*/
public int getErrorCode() {
return errorCode;
}
/**
* @return error message
*/
public String getErrorText() {
return errorText;
}
/**
* @return whether the error is a parsing error
*/
public boolean isParsingError() {
return parsingError;
}
}
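The `parsingError` flag exists so callers can distinguish "client logic could not parse the query" (in which case the original query should still be sent, since `PgConnection.isValid` depends on a deliberately unparseable query) from hard failures. A minimal self-contained sketch of that control flow, using a local stand-in for `ClientLogicException` (names and the unchecked base class are illustrative):

```java
public class ParsingErrorFlowSketch {
    // Local stand-in mirroring ClientLogicException's parsingError flag;
    // unchecked here only to keep the sketch short.
    public static class CLException extends RuntimeException {
        public final boolean parsingError;

        public CLException(String msg, boolean parsingError) {
            super(msg);
            this.parsingError = parsingError;
        }
    }

    // Pretends to rewrite a query; fails with a parsing error on demand.
    public static String preprocess(String query, boolean simulateParseFailure) {
        if (simulateParseFailure) {
            throw new CLException("cannot parse", true);
        }
        return query + " /* rewritten */";
    }

    // Returns the query to actually send: the rewritten one on success,
    // the untouched original when client logic merely failed to parse it.
    public static String queryToSend(String query, boolean simulateParseFailure) {
        try {
            return preprocess(query, simulateParseFailure);
        } catch (CLException e) {
            if (e.parsingError) {
                return query; // do not block bad queries; let the server decide
            }
            throw e;
        }
    }
}
```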


@@ -0,0 +1,197 @@
package org.postgresql.jdbc;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import org.postgresql.util.JdbcBlackHole;
public class ClientLogicImpl {
static {
System.loadLibrary("gauss_cl_jni");
}
// Native methods:
private native Object[] linkClientLogicImpl(String databaseName);
private native Object[] runQueryPreProcessImpl(long handle, String originalQuery);
private native Object[] runQueryPostProcessImpl(long handle);
private native Object[] runClientLogicImpl(long handle, String processData, int dataType);
private native Object[] prepareQueryImpl(long handle, String query, String statement_name, int parameter_count);
private native Object[] replaceStatementParamsImpl(long handle, String statementName, String[] param_values);
private native Object[] replaceErrorMessageImpl(long handle, String originalMessage);
private native void destroy(long handle);
private long m_handle = 0;
private PgConnection m_jdbcConn = null;
/**
* Link between the Java PgConnection and the PGConn Client logic object
* @param databaseName the database name
* @param jdbcConn the JDBC connection, to be used later to fetch data
* @return
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1] - long - instance handle to be re-used in future calls by the same connection
*/
public Object[] linkClientLogic(String databaseName, PgConnection jdbcConn) {
if (m_handle == 0 && m_jdbcConn == null) {
m_jdbcConn = jdbcConn;
return linkClientLogicImpl(databaseName);
}
else {
// no extra logic needed here; the parent class handles this case
return new Object[]{};
}
}
/**
* Runs query pre-processing to replace client logic field values with their binary format before the query is sent to the database server
* @param originalQuery the query with potentially client logic values in user format
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1] - String - The modified query
*/
public Object[] runQueryPreProcess(String originalQuery) {
return runQueryPreProcessImpl(m_handle, originalQuery);
}
/**
* Replaces a client logic field value with the original user input - used when receiving data in a result set
* @param processData the data in binary format (hex)
* @param dataType the oid (modid) of the original field type
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1] - String - The data in user format
*/
public Object[] runClientLogic(String processData, int dataType) {
return runClientLogicImpl(m_handle, processData, dataType);
}
/**
* Runs post-processing on the backend to free the client logic state machine when a query is done
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*/
public Object[] runQueryPostProcess() {
return runQueryPostProcessImpl(m_handle);
}
/**
* Prepare a statement on the backend side
* @param query the statement query text
* @param statement_name the name of the statement
* @param parameter_count number of parameters in the query
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1] - String - The modified query - to be used if the query had client logic fields in user format that had to be replaced with binary values
*/
public Object[] prepareQuery(String query, String statement_name, int parameter_count){
return prepareQueryImpl(m_handle, query, statement_name, parameter_count);
}
/**
* Replaces parameter values in a prepared statement - to be called before binding the parameters and executing the statement
* @param statementName the name of the statement
* @param paramValues array of parameters in user format
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1][0 ... parameter_count - 1] - array with the parameter values; a NULL appears for parameters that are not replaced, otherwise the replaced value
*/
public Object[] replaceStatementParams(String statementName, String[] paramValues) {
return replaceStatementParamsImpl(m_handle, statementName, paramValues);
}
/**
* Replace binary client logic values in an error message back to user input format
* For example changes unique constraint error message from:
* ... Key (name)=(\xa1d4....) already exists. ...
* to:
* ... Key (name)=(John) already exists. ...
* @param originalMessage the error message received from the server
* @return array of objects
*[0][0] - int status code - zero for success
*[0][1] - string status description
*[1] - String - The modified message; may be an empty string if the message has no client logic values in it
*/
public Object[] replaceErrorMessage(String originalMessage) {
return replaceErrorMessageImpl(m_handle, originalMessage);
}
/**
* Closes the backend libpq connection - must be called when the Java connection is closed.
*/
protected void close() {
if (m_handle > 0) {
destroy(m_handle);
}
m_handle = 0;
}
/**
* Standard Java finalizer; ensures the native resources are released
*/
protected void finalize() {
close();
}
/**
* Sets the native instance handle
* @param handle the native instance handle
*/
public void setHandle(long handle) {
m_handle = handle;
}
/**
* @return the native instance handle
*/
public long getHandle() {
return m_handle;
}
/**
* This method is invoked from the client logic C++ code.
* It is used to fetch data from the server regarding the client logic settings - cache manager
* @param query the query to invoke
* @return array of results in the following format
* [0] - error message - an empty string on success
* [1] - array of column headers
* [2...n] - arrays of row values
*/
public Object[] fetchDataFromQuery(String query) {
PgStatement st = null;
List<Object> data = new ArrayList<>();
data.add(""); //No Error
try {
st = (PgStatement)m_jdbcConn.createStatement();
ResultSet rs = st.executeQueryWithNoCL(query);
ResultSetMetaData rsmd = rs.getMetaData();
int columnsCount = rsmd.getColumnCount();
List<String> headers = new ArrayList<>();
//fill data in headers
for (int colIndex = 0; colIndex < columnsCount; ++colIndex) {
headers.add(rsmd.getColumnName(colIndex+1));
}
data.add(headers.toArray());
while (rs.next()){
List<String> record = new ArrayList<>();
for (int colIndex = 1; colIndex < columnsCount + 1; ++colIndex) {
String colValue = rs.getString(colIndex);
if (colValue == null) {
colValue = "";
}
record.add(colValue);
}
data.add(record.toArray());
}
st.close();
}
catch (SQLException e) {
List<Object> errorResponse = new ArrayList<>();
errorResponse.add(e.getMessage());
return errorResponse.toArray();
} finally {
JdbcBlackHole.close(st);
}
return data.toArray();
}
}
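The array returned by `fetchDataFromQuery` above always starts with an error slot: `[0]` is an error string (empty on success), `[1]` the column headers, and `[2...n]` the row arrays. A minimal sketch of consuming that shape on the receiving side (class and method names are illustrative):

```java
import java.util.Arrays;

public class FetchResultSketch {
    // Mirrors the shape produced by fetchDataFromQuery:
    // [0] error text ("" on success), [1] column headers, [2..n] row arrays.
    public static String describe(Object[] data) {
        if (data.length == 0 || !(data[0] instanceof String)) {
            return "malformed result";
        }
        String error = (String) data[0];
        if (!error.isEmpty()) {
            return "error: " + error;
        }
        Object[] headers = (Object[]) data[1];
        int rowCount = data.length - 2;
        return Arrays.toString(headers) + ", rows=" + rowCount;
    }
}
```

Putting the error text in slot `[0]` keeps the JNI boundary to a single `Object[]`, at the cost of callers having to check it before touching the payload.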


@@ -59,18 +59,20 @@ public class FieldMetadata implements CanEstimateSize {
   final String schemaName;
   final int nullable;
   final boolean autoIncrement;
+  final int clientLogicOriginalMod;

   public FieldMetadata(String columnName) {
-    this(columnName, "", "", PgResultSetMetaData.columnNullableUnknown, false);
+    this(columnName, "", "", PgResultSetMetaData.columnNullableUnknown, false, 0);
   }

   FieldMetadata(String columnName, String tableName, String schemaName, int nullable,
-      boolean autoIncrement) {
+      boolean autoIncrement, int clientLogicOriginalMod) {
     this.columnName = columnName;
     this.tableName = tableName;
     this.schemaName = schemaName;
     this.nullable = nullable;
     this.autoIncrement = autoIncrement;
+    this.clientLogicOriginalMod = clientLogicOriginalMod;
   }

   public long getSize() {
@@ -78,7 +80,8 @@ public class FieldMetadata implements CanEstimateSize {
         + tableName.length() * 2
         + schemaName.length() * 2
         + 4L
-        + 1L;
+        + 1L
+        + 4L; // for clientLogic data
   }

   @Override


@@ -138,6 +138,9 @@ class PgCallableStatement extends PgPreparedStatement implements CallableStatement {
           callResult[j] = typeCompatibility.convert(callResult[j]);
         }
       }
+      else if (columnType == Types.BLOB && functionReturnType[j] == Types.OTHER)
+      {
+      }
       else {
         throw new PSQLException(GT.tr(
             "A CallableStatement function was executed and the out parameter {0} was of type {1} however type {2} was registered.",
@@ -196,6 +199,9 @@ class PgCallableStatement extends PgPreparedStatement implements CallableStatement {
       case Types.BOOLEAN:
         sqlType = Types.BIT;
         break;
+      case Types.NCHAR:
+        sqlType = Types.CHAR;
+        break;
       default:
         break;
     }
@@ -489,10 +495,11 @@ class PgCallableStatement extends PgPreparedStatement implements CallableStatement {
   }

   public Clob getClob(int i) throws SQLException {
-    Clob clob = new PGClob();
-    clob.setString(1, getString(i));
-    return clob;
+    PGClob pgClob = (PGClob) callResult[i-1];
+    String str = pgClob.getSubString(1, (int) pgClob.length());
+    Clob clob = new PGClob();
+    clob.setString(1, str);
+    return clob;
   }

   public Object getObjectImpl(int i, Map<String, Class<?>> map) throws SQLException {
@@ -659,7 +666,7 @@ class PgCallableStatement extends PgPreparedStatement implements CallableStatement {
   }

   public String getNString(int parameterIndex) throws SQLException {
-    throw Driver.notImplemented(this.getClass(), "getNString(int)");
+    return this.getString(parameterIndex);
   }

   public String getNString(String parameterName) throws SQLException {
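The revised `getClob` reads the full content of the stored `PGClob` with `getSubString(1, length)` and copies it into a fresh `Clob` instead of round-tripping through `getString`. A standalone sketch of the same copy pattern using the JDK's `SerialClob` as a stand-in for `PGClob` (which is not part of the JDK); the helper names are illustrative:

```java
import java.sql.Clob;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialClob;

public class ClobCopySketch {
    // Convenience factory so callers need not handle checked exceptions
    public static Clob of(String s) {
        try {
            return new SerialClob(s.toCharArray());
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    // Copy pattern from getClob above: read the whole content of an existing
    // Clob via getSubString(1, length) and place it into a fresh Clob.
    public static Clob copy(Clob source) {
        try {
            String str = source.getSubString(1, (int) source.length());
            return new SerialClob(str.toCharArray());
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    public static String asString(Clob c) {
        try {
            return c.getSubString(1, (int) c.length());
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note JDBC `Clob` positions are 1-based, which is why the copy starts at position 1.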


@@ -91,6 +91,20 @@ public class PgConnection implements BaseConnection {
   private static final SQLPermission SQL_PERMISSION_ABORT = new SQLPermission("callAbort");
   private static final SQLPermission SQL_PERMISSION_NETWORK_TIMEOUT = new SQLPermission("setNetworkTimeout");
+  private static final Map<String, String> CONNECTION_INFO_REPORT_BLACK_LIST;
+  static {
+    CONNECTION_INFO_REPORT_BLACK_LIST = new HashMap<>();
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("user", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("sslcert", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("password", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("sslkey", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("sslpassword", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("PGHOSTURL", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("PGPORTURL", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("PGPORT", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("PGHOST", "");
+    CONNECTION_INFO_REPORT_BLACK_LIST.put("PGDBNAME", "");
+  }

   //
   // Data initialized on construction:
@@ -110,6 +124,8 @@ public class PgConnection implements BaseConnection {
   /* Query that runs ROLLBACK */
   private final Query rollbackQuery;

+  private ClientLogic clientLogic = null;
+
   private final TypeInfo _typeCache;
   private boolean disableColumnSanitiser = false;
@@ -243,7 +259,6 @@ public class PgConnection implements BaseConnection {
       LOGGER.warn("Unsupported Server Version: " + queryExecutor.getServerVersion());
     }

     Set<Integer> binaryOids = getBinaryOids(info);
-
     // split for receive and send for better control
     Set<Integer> useBinarySendForOids = new HashSet<Integer>(binaryOids);
@@ -307,6 +322,8 @@ public class PgConnection implements BaseConnection {
       _typeCache.addCoreType("uuid", Oid.UUID, Types.OTHER, "java.util.UUID", Oid.UUID_ARRAY);
       _typeCache.addCoreType("xml", Oid.XML, Types.SQLXML, "java.sql.SQLXML", Oid.XML_ARRAY);
     }
+    _typeCache.addCoreType("clob", Oid.CLOB, Types.CLOB, "java.sql.CLOB", Oid.UNSPECIFIED);
+    _typeCache.addCoreType("blob", Oid.BLOB, Types.BLOB, "java.sql.BLOB", Oid.UNSPECIFIED);

     this._clientInfo = new Properties();
     if (haveMinimumServerVersion(ServerVersion.v9_0)) {
@@ -355,7 +372,7 @@ public class PgConnection implements BaseConnection {
     // Done to scan all GUC results here, and begin to do some initialization.
     if (useConnectionInfo) {
-      String connectionInfo = getConnectionInfo(useConnectionExtraInfo);
+      String connectionInfo = getConnectionInfo(useConnectionExtraInfo, info);
       String setConnectionInfoSql = "set connection_info = '" + connectionInfo.replace("'", "''") + "'";
       if (!setConnectionInfoSql.contains(";")) {
         stmtSetGuc = createStatement();
@@ -426,6 +443,43 @@ public class PgConnection implements BaseConnection {
       batchInsert = false;
     }
+    initClientLogic(info);
+  }
+
+  /**
+   * Link the client logic JNI
+   * @param info connection settings
+   * @throws SQLException
+   */
+  private void initClientLogic(Properties info) throws SQLException {
+    // Connect the client logic only when the property is set
+    if (PGProperty.PG_CLIENT_LOGIC.get(info) != null && PGProperty.PG_CLIENT_LOGIC.get(info).equals("1")) {
+      String autoBalance = info.getProperty("autoBalance");
+      String targetType = info.getProperty("targetServerType");
+      if ((autoBalance != null && !autoBalance.equals("false")) || (targetType != null)) {
+        LOGGER.error("[client encryption] Failed connecting to client logic as autoBalance or targetServerType is set");
+        clientLogic = null;
+        throw new PSQLException(
+            GT.tr("Failed connecting to client logic"),
+            PSQLState.INVALID_PARAMETER_VALUE);
+      }
+      LOGGER.trace("Initiating client logic");
+      try {
+        clientLogic = new ClientLogic();
+        String databaseName = PGProperty.PG_DBNAME.get(info);
+        clientLogic.linkClientLogic(databaseName, this);
+      }
+      catch (ClientLogicException e) {
+        clientLogic = null;
+        LOGGER.error("Failed connecting to client logic");
+        throw new PSQLException(
+            GT.tr("Failed connecting to client logic" + e.getMessage()),
+            PSQLState.INVALID_PARAMETER_VALUE);
+      }
+    }
+    else {
+      LOGGER.trace("Client logic is off");
+    }
   }

   private static Set<Integer> getBinaryOids(Properties info) throws PSQLException {
@@ -486,30 +540,63 @@ public class PgConnection implements BaseConnection {
     return sb.toString();
   }

-  private String getConnectionInfo(boolean withExtraInfo) {
+  private String getConnectionInfo(boolean withExtraInfo, Properties info) {
     String connectionInfo = "";
     String gsVersion = null;
     String driverPath = null;
     String OSUser = null;
-    try {
-      gsVersion = Driver.getGSVersion();
-      if (withExtraInfo) {
-        OSUser = System.getProperty("user.name");
-        File jarDir = new File(Driver.class.getProtectionDomain().getCodeSource().getLocation().toURI());
-        driverPath = jarDir.getCanonicalPath();
-      }
-      connectionInfo = "{" + "\"driver_name\":\"JDBC\"," + "\"driver_version\":\""
-          + gsVersion.replace("\"", "\\\"") + "\"";
-      if (withExtraInfo && driverPath != null) {
-        connectionInfo += ",\"driver_path\":\"" + driverPath.replace("\\", "\\\\").replace("\"", "\\\"") + "\","
-            + "\"os_user\":\"" + OSUser.replace("\"", "\\\"") + "\"";
-      }
-      connectionInfo += "}";
-      return connectionInfo;
-    } catch (URISyntaxException | IOException e) {
-      LOGGER.trace("Failed to make connection_info as there is an exception: " + e.getMessage());
-    }
-    return "";
+    String urlConfiguration = null;
+    gsVersion = Driver.getGSVersion();
+    if (withExtraInfo) {
+      OSUser = System.getProperty("user.name");
+      try {
+        File jarDir = new File(Driver.class.getProtectionDomain().getCodeSource().getLocation().toURI());
+        driverPath = jarDir.getCanonicalPath();
+      } catch (URISyntaxException | IOException | IllegalArgumentException e) {
+        driverPath = "";
+        LOGGER.trace("Failed to make connection_info as there is an exception: " + e.getMessage());
+      }
+      urlConfiguration = reassembleUrl(info);
+    }
+    connectionInfo = "{" + "\"driver_name\":\"JDBC\"," + "\"driver_version\":\""
+        + gsVersion.replace("\"", "\\\"") + "\"";
+    if (withExtraInfo && driverPath != null) {
+      connectionInfo += ",\"driver_path\":\"" + driverPath.replace("\\", "\\\\").replace("\"", "\\\"") + "\","
+          + "\"os_user\":\"" + OSUser.replace("\"", "\\\"") + "\","
+          + "\"urlConfiguration\":\"" + urlConfiguration.replace("\"", "\\\"") + "\"";
+    }
+    connectionInfo += "}";
+    return connectionInfo;
+  }
+
+  private String reassembleUrl(Properties info) {
+    StringBuffer urlConfiguration = new StringBuffer();
+    if (creatingURL.startsWith("jdbc:postgresql:")) {
+      urlConfiguration.append("jdbc:postgresql://");
+    } else if (creatingURL.startsWith("jdbc:dws:iam:")) {
+      urlConfiguration.append("jdbc:dws:iam://");
+    } else if (creatingURL.startsWith("jdbc:opengauss:")) {
+      urlConfiguration.append("jdbc:opengauss://");
+    } else {
+      urlConfiguration.append("jdbc:gaussdb://");
+    }
+    String[] ports = info.getProperty("PGPORTURL").split(",");
+    String[] hosts = info.getProperty("PGHOSTURL").split(",", ports.length);
+    for (int i = 0; i < hosts.length; ++i) {
+      urlConfiguration.append(hosts[i] + ":" + ports[i] + ",");
+    }
+    urlConfiguration.deleteCharAt(urlConfiguration.length() - 1);
+    urlConfiguration.append("/" + info.getProperty("PGDBNAME") + "?");
+    for (String propertyName : info.stringPropertyNames()) {
+      if (CONNECTION_INFO_REPORT_BLACK_LIST.get(propertyName) == null) {
+        urlConfiguration.append(propertyName + "=" + info.getProperty(propertyName) + "&");
+      }
+    }
+    urlConfiguration.deleteCharAt(urlConfiguration.length() - 1);
+    return urlConfiguration.toString();
   }

   private final TimestampUtils timestampUtils;
@@ -812,6 +899,14 @@ public class PgConnection implements BaseConnection {
     }
   }

+  /**
+   * Returns a reference to the client logic object; if the feature is off, null is returned
+   */
+  @Override
+  public ClientLogic getClientLogic() {
+    return clientLogic;
+  }
+
   /**
    * <B>Note:</B> even though {@code Statement} is automatically closed when it is garbage
    * collected, it is better to close it explicitly to lower resource consumption.
@@ -820,6 +915,10 @@ public class PgConnection implements BaseConnection {
    */
   @Override
   public void close() throws SQLException {
+    if (clientLogic != null) {
+      clientLogic.close();
+      clientLogic = null;
+    }
     if (queryExecutor == null) {
       // This might happen in case constructor throws an exception (e.g. host being not available).
       // When that happens the connection is still registered in the finalizer queue, so it gets finalized
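`reassembleUrl` rebuilds a reportable JDBC URL from the parsed `Properties`, skipping every key in `CONNECTION_INFO_REPORT_BLACK_LIST` so credentials and internal host/port keys never leak into `connection_info`. A minimal self-contained sketch of that filtering step (the class, method, and blacklist names here are illustrative stand-ins, not the committed API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import java.util.StringJoiner;

public class UrlFilterSketch {
    // Hypothetical stand-in for CONNECTION_INFO_REPORT_BLACK_LIST:
    // sensitive or internal keys that are never echoed back.
    private static final Set<String> BLACK_LIST = new HashSet<>(
        Arrays.asList("user", "password", "sslkey", "sslcert", "sslpassword",
                      "PGHOST", "PGPORT", "PGHOSTURL", "PGPORTURL", "PGDBNAME"));

    // Builds the query-string part of the reported URL from the remaining keys.
    public static String queryString(Properties info) {
        StringJoiner joiner = new StringJoiner("&");
        for (String name : info.stringPropertyNames()) {
            if (!BLACK_LIST.contains(name)) {
                joiner.add(name + "=" + info.getProperty(name));
            }
        }
        return joiner.toString();
    }
}
```

Using a set lookup per key keeps the filter O(1) per property, and an allow-by-default policy means newly added driver options show up in diagnostics without code changes.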


@@ -1451,6 +1451,15 @@ public class PgDatabaseMetaData implements DatabaseMetaData {
   public ResultSet getColumns(String catalog, String schemaPattern, String tableNamePattern,
       String columnNamePattern) throws SQLException {
+    /* Adding client logic code to show the real data type and not the client logic data types */
+    boolean isClientLogicOn = false;
+    String clientLogicSelectClause = " ";
+    String clientLogicFromClause = " ";
+    if (connection.getClientLogic() != null) {
+      isClientLogicOn = true;
+      clientLogicSelectClause = ",data_type_original_oid, data_type_original_mod ";
+      clientLogicFromClause = " LEFT JOIN pg_catalog.gs_encrypted_columns ce ON (a.attrelid=ce.rel_id AND a.attname = ce.column_name) ";
+    }
     int numberOfFields = 23; // JDBC4
     List<byte[][]> v = new ArrayList<byte[][]>(); // The new ResultSet tuple stuff
     Field[] f = new Field[numberOfFields]; // The field descriptors for the new ResultSet
@@ -1509,6 +1518,7 @@ public class PgDatabaseMetaData implements DatabaseMetaData {
       sql += "null as attidentity,";
     }
     sql += "pg_catalog.pg_get_expr(def.adbin, def.adrelid) AS adsrc,dsc.description,t.typbasetype,t.typtype "
+        + clientLogicSelectClause
         + " FROM pg_catalog.pg_namespace n "
         + " JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid) "
         + " JOIN pg_catalog.pg_attribute a ON (a.attrelid=c.oid) "
@@ -1517,6 +1527,7 @@ public class PgDatabaseMetaData implements DatabaseMetaData {
         + " LEFT JOIN pg_catalog.pg_description dsc ON (c.oid=dsc.objoid AND a.attnum = dsc.objsubid) "
         + " LEFT JOIN pg_catalog.pg_class dc ON (dc.oid=dsc.classoid AND dc.relname='pg_class') "
         + " LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace=dn.oid AND dn.nspname='pg_catalog') "
+        + clientLogicFromClause
         + " WHERE c.relkind in ('r','p','v','f','m') and a.attnum > 0 AND NOT a.attisdropped ";

     if (schemaPattern != null && !schemaPattern.isEmpty()) {
@@ -1538,7 +1549,19 @@ public class PgDatabaseMetaData implements DatabaseMetaData {
     while (rs.next()) {
       byte[][] tuple = new byte[numberOfFields][];
       int typeOid = (int) rs.getLong("atttypid");
+      int clientLogicOriginalType = 0;
+      if (isClientLogicOn) {
+        if (rs.getString("data_type_original_oid") != null) {
+          clientLogicOriginalType = typeOid;
+          typeOid = (int) rs.getLong("data_type_original_oid");
+        }
+      }
       int typeMod = rs.getInt("atttypmod");
+      if (isClientLogicOn) {
+        if (rs.getString("data_type_original_mod") != null) {
+          typeMod = (int) rs.getLong("data_type_original_mod");
+        }
+      }

       tuple[0] = null; // Catalog name, not supported
       tuple[1] = rs.getBytes("nspname"); // Schema
@@ -1597,6 +1620,10 @@ public class PgDatabaseMetaData implements DatabaseMetaData {
       tuple[11] = rs.getBytes("description"); // Description (if any)
       tuple[12] = rs.getBytes("adsrc"); // Column default
       tuple[13] = null; // sql data type (unused)
+      if (clientLogicOriginalType > 0) {
+        // Add in this unused value the client logic type id
+        tuple[13] = connection.encodeString(Integer.toString(clientLogicOriginalType));
+      }
       tuple[14] = null; // sql datetime sub (unused)
       tuple[15] = tuple[6]; // char octet length
       tuple[16] = connection.encodeString(String.valueOf(rs.getInt("attnum"))); // ordinal position
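The `getColumns` change above swaps the reported type OID with the original (pre-encryption) type from `gs_encrypted_columns` when present, while remembering the client logic type separately. That override rule can be isolated into a tiny standalone sketch (names are illustrative, not the committed API):

```java
public class TypeOverrideSketch {
    // Mirrors the getColumns logic: when the catalog reports an original
    // (pre-encryption) type for a column, report that instead, and keep the
    // client logic type on the side. Returns {reportedOid, clientLogicOid}.
    public static int[] resolve(int typeOid, Integer originalOid) {
        int clientLogicOriginalType = 0;
        if (originalOid != null) {
            clientLogicOriginalType = typeOid; // remember the encrypted type
            typeOid = originalOid;             // report the original user type
        }
        return new int[]{typeOid, clientLogicOriginalType};
    }
}
```

A `0` in the second slot means the column is not encrypted, matching the `clientLogicOriginalType > 0` guard used above when filling `tuple[13]`.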


@@ -124,8 +124,15 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
   public int executeUpdate() throws SQLException {
     executeWithFlags(QueryExecutor.QUERY_NO_RESULTS);
+    checkNoResultUpdate();
+    return getUpdateCount();
+  }

-    return getNoResultUpdateCount();
+  @Override
+  public long executeLargeUpdate() throws SQLException {
+    executeWithFlags(QueryExecutor.QUERY_NO_RESULTS);
+    checkNoResultUpdate();
+    return getLargeUpdateCount();
   }

   public boolean execute(String p_sql) throws SQLException {
@@ -237,6 +244,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[2];
       ByteConverter.int2(val, 0, x);
       bindBytes(parameterIndex, val, Oid.INT2);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Integer.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Integer.toString(x), Oid.INT2);
@@ -248,6 +256,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[2];
       ByteConverter.int2(val, 0, x);
       bindBytes(parameterIndex, val, Oid.INT2);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Integer.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Integer.toString(x), Oid.INT2);
@@ -259,6 +268,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[4];
       ByteConverter.int4(val, 0, x);
       bindBytes(parameterIndex, val, Oid.INT4);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Integer.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Integer.toString(x), Oid.INT4);
@@ -270,6 +280,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[8];
       ByteConverter.int8(val, 0, x);
       bindBytes(parameterIndex, val, Oid.INT8);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Long.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Long.toString(x), Oid.INT8);
@@ -281,6 +292,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[4];
       ByteConverter.float4(val, 0, x);
       bindBytes(parameterIndex, val, Oid.FLOAT4);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Float.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Float.toString(x), Oid.FLOAT8);
@@ -292,6 +304,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] val = new byte[8];
       ByteConverter.float8(val, 0, x);
       bindBytes(parameterIndex, val, Oid.FLOAT8);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, Double.toString(x));
       return;
     }
     bindLiteral(parameterIndex, Double.toString(x), Oid.FLOAT8);
@@ -467,6 +480,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       byte[] data = new byte[binObj.lengthInBytes()];
       binObj.toBytes(data, 0);
       bindBytes(parameterIndex, data, oid);
+      preparedParameters.saveLiteralValueForClientLogic(parameterIndex, x.getValue());
     } else {
       setString(parameterIndex, x.getValue(), oid);
     }
@@ -1105,6 +1119,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
       PgArray arr = (PgArray) x;
       if (arr.isBinary()) {
         bindBytes(i, arr.toBytes(), oid);
+        preparedParameters.saveLiteralValueForClientLogic(i, x.toString());
         return;
       }
     }
@@ -1386,7 +1401,7 @@ class PgPreparedStatement extends PgStatement implements PreparedStatement {
   }

   public void setNString(int parameterIndex, String value) throws SQLException {
-    throw Driver.notImplemented(this.getClass(), "setNString(int, String)");
+    this.setString(parameterIndex, value);
   }

   public void setNCharacterStream(int parameterIndex, Reader value, long length)
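The repeated `saveLiteralValueForClientLogic` calls above all follow one pattern: whenever a parameter is bound in binary form, a user-format text copy of the same value is kept so the client logic layer can later rewrite the prepared statement's parameters. A minimal self-contained sketch of that capture pattern (class and method names are illustrative, not the driver's API):

```java
import java.util.HashMap;
import java.util.Map;

public class LiteralCaptureSketch {
    // Parallels preparedParameters.saveLiteralValueForClientLogic(...):
    // each binary bind is shadowed by its user-format literal.
    private final Map<Integer, byte[]> binaryBinds = new HashMap<>();
    private final Map<Integer, String> literals = new HashMap<>();

    public void bindInt(int parameterIndex, int x) {
        // big-endian int4 encoding, as the wire protocol expects
        byte[] val = new byte[4];
        val[0] = (byte) (x >>> 24);
        val[1] = (byte) (x >>> 16);
        val[2] = (byte) (x >>> 8);
        val[3] = (byte) x;
        binaryBinds.put(parameterIndex, val);
        literals.put(parameterIndex, Integer.toString(x)); // user-format copy
    }

    public String literalFor(int parameterIndex) {
        return literals.get(parameterIndex);
    }
}
```

Keeping the literal beside the binary bind avoids re-decoding the wire bytes when `replaceStatementParams` later needs the user-format values.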


@ -149,6 +149,7 @@ public class PgResultSet implements ResultSet, org.postgresql.PGRefCursorResultS
this.statement = statement; this.statement = statement;
this.fields = fields; this.fields = fields;
this.rows = tuples; this.rows = tuples;
clientLogicGetData();
this.cursor = cursor; this.cursor = cursor;
this.maxRows = maxRows; this.maxRows = maxRows;
this.maxFieldSize = maxFieldSize; this.maxFieldSize = maxFieldSize;
@ -156,6 +157,76 @@ public class PgResultSet implements ResultSet, org.postgresql.PGRefCursorResultS
this.resultsetconcurrency = rsConcurrency; this.resultsetconcurrency = rsConcurrency;
} }
private static final char[] HEX_ARRAY = "0123456789ABCDEF".toCharArray();
    /**
     * Helper function for clientLogicGetData, used when the client logic data is received as binary.
     * @param bytes the binary data
     * @return hexadecimal representation of the binary data
     */
public static String bytesArrayToHexString(byte[] bytes) {
char[] hexCharArray = new char[bytes.length * 2];
for (int i = 0; i < bytes.length; i++) {
int val = bytes[i] & 0xFF;
hexCharArray[i * 2] = HEX_ARRAY[val >>> 4];
hexCharArray[i * 2 + 1] = HEX_ARRAY[val & 0x0F];
}
return new String(hexCharArray);
}
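Extracted as a standalone sketch, the helper above can be exercised on its own. `HexDemo` is a hypothetical wrapper class, not part of the driver; only the method body mirrors the committed code:

```java
// Standalone sketch of bytesArrayToHexString; HexDemo is a hypothetical wrapper
// class used only to make the helper runnable outside the driver.
public class HexDemo {
    private static final char[] HEX_ARRAY = "0123456789ABCDEF".toCharArray();

    public static String bytesArrayToHexString(byte[] bytes) {
        char[] hexCharArray = new char[bytes.length * 2];
        for (int i = 0; i < bytes.length; i++) {
            int val = bytes[i] & 0xFF; // mask so the byte is treated as unsigned
            hexCharArray[i * 2] = HEX_ARRAY[val >>> 4];      // high nibble
            hexCharArray[i * 2 + 1] = HEX_ARRAY[val & 0x0F]; // low nibble
        }
        return new String(hexCharArray);
    }

    public static void main(String[] args) {
        // The driver prefixes the result with "\x" before handing it to client logic
        System.out.println("\\x" + bytesArrayToHexString(new byte[]{0x0A, (byte) 0xFF})); // prints \x0AFF
    }
}
```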
    /**
     * This method transforms client logic column data back from its client logic wire format to the user input format.
     */
    private void clientLogicGetData() {
        ClientLogic clientLogic = connection.getClientLogic();
        // if client logic is off, there is nothing to do
        if (clientLogic == null) {
            return;
        }
Encoding encoding = null;
try {
encoding = connection.getEncoding();
} catch (SQLException e1) {
// If we cannot get the encoding object, cannot move on.
connection.getLogger().error("client logic failed - could not get connection encoding");
return;
}
int fieldIndex = 0;
        // Loop through the list of fields
for (Field field : this.fields) {
if (ClientLogic.isClientLogicField(field.getOID())) {
for (int rowIndex = 0; rowIndex < rows.size(); ++rowIndex) {
try {
if (rows.get(rowIndex)[fieldIndex] != null) {
String clientLogicValue = "";
// The client logic fields may arrive as binary or as UTF-8.
                            // Need to find out which it is and act accordingly
if (field.getFormat() == Field.BINARY_FORMAT) {
clientLogicValue = "\\x" + bytesArrayToHexString(rows.get(rowIndex)[fieldIndex]);
} else {
clientLogicValue = encoding.decode(rows.get(rowIndex)[fieldIndex]);
}
String userInputValue = "";
try {
userInputValue = clientLogic.runClientLogic(clientLogicValue, field.getMod());
// Encode the data back the same way, so the field is now not binary
}
catch (ClientLogicException e) {
connection.getLogger().error("client logic failed for field:" + field.getColumnLabel() +
", value: " + clientLogicValue + " Error:" +
e.getErrorCode() + ":" + e.getErrorText());
}
rows.get(rowIndex)[fieldIndex] = encoding.encode(userInputValue);
}
}
catch (IOException e) {
connection.getLogger().error("client logic failed encoding on IOException for field:" + field.getColumnLabel());
}
}
}
++fieldIndex;
}
}
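The per-cell decoding performed by `clientLogicGetData` can be condensed into a driver-independent sketch. `decodeCell` and the `runClientLogic` functional parameter are hypothetical stand-ins for the real `ClientLogic.runClientLogic` call; only the hex-or-UTF-8 branching follows the code above:

```java
import java.nio.charset.StandardCharsets;
import java.util.function.UnaryOperator;

public class ClientLogicDecodeSketch {
    private static final char[] HEX = "0123456789ABCDEF".toCharArray();

    // One cell of one client logic column: binary cells are hex-encoded with a
    // "\x" prefix before being handed to the client logic routine, text cells
    // are decoded as UTF-8; the routine's result is re-encoded as text bytes.
    static byte[] decodeCell(byte[] cell, boolean binaryFormat, UnaryOperator<String> runClientLogic) {
        if (cell == null) {
            return null; // NULLs stay NULL, as in the driver
        }
        String wireValue;
        if (binaryFormat) {
            StringBuilder sb = new StringBuilder("\\x");
            for (byte b : cell) {
                sb.append(HEX[(b & 0xFF) >>> 4]).append(HEX[b & 0x0F]);
            }
            wireValue = sb.toString();
        } else {
            wireValue = new String(cell, StandardCharsets.UTF_8);
        }
        // runClientLogic is a hypothetical stand-in for ClientLogic.runClientLogic()
        return runClientLogic.apply(wireValue).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] out = decodeCell(new byte[]{0x01, (byte) 0xAB}, true, v -> "<decrypted:" + v + ">");
        System.out.println(new String(out, StandardCharsets.UTF_8)); // prints <decrypted:\x01AB>
    }
}
```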
    public java.net.URL getURL(int columnIndex) throws SQLException {
        connection.getLogger().trace("[" + connection.getSocketAddress() + "] " + " getURL columnIndex: " + columnIndex);
        checkClosed();
@@ -1812,17 +1883,21 @@ public class PgResultSet implements ResultSet, org.postgresql.PGRefCursorResultSet
    public class CursorResultHandler extends ResultHandlerBase {

        @Override
        public void handleResultRows(Query fromQuery, Field[] fields, List<byte[][]> tuples,
                ResultCursor cursor) {
            PgResultSet.this.rows = tuples;
            PgResultSet.this.clientLogicGetData(); // for the client logic case, the rows must be pre-processed
            PgResultSet.this.cursor = cursor;
        }
        @Override
        public void handleCommandStatus(String status, long updateCount, long insertOID) {
            handleError(new PSQLException(GT.tr("Unexpected command status: {0}.", status),
                    PSQLState.PROTOCOL_VIOLATION));
        }
        @Override
        public void handleCompletion() throws SQLException {
            SQLWarning warning = getWarning();
            if (warning != null) {
@@ -2854,6 +2929,10 @@ public class PgResultSet implements ResultSet, org.postgresql.PGRefCursorResultSet
     * @return True if the column is in binary format.
     */
    protected boolean isBinary(int column) {
        ClientLogic clientLogic = connection.getClientLogic();
        if (clientLogic != null && ClientLogic.isClientLogicField(fields[column - 1].getOID())) {
            return false; // even if it was received as binary, it was re-encoded as text in clientLogicGetData
        }
        return fields[column - 1].getFormat() == Field.BINARY_FORMAT;
    }
@@ -3621,8 +3700,7 @@ public class PgResultSet implements ResultSet, org.postgresql.PGRefCursorResultSet
    }

    public String getNString(int columnIndex) throws SQLException {
        return this.getString(columnIndex);
    }

    public String getNString(String columnName) throws SQLException {
@@ -72,6 +72,9 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
     */
    public boolean isCaseSensitive(int column) throws SQLException {
        Field field = getField(column);
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(field.getOID())) {
            return connection.getTypeInfo().isCaseSensitive(field.getMod());
        }
        return connection.getTypeInfo().isCaseSensitive(field.getOID());
    }
@@ -129,11 +132,19 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
     */
    public boolean isSigned(int column) throws SQLException {
        Field field = getField(column);
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(field.getOID())) {
            fetchFieldMetaData();
            return connection.getTypeInfo().isSigned(field.getMod());
        }
        return connection.getTypeInfo().isSigned(field.getOID());
    }
    public int getColumnDisplaySize(int column) throws SQLException {
        Field field = getField(column);
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(field.getOID())) {
            fetchFieldMetaData();
            return connection.getTypeInfo().getDisplaySize(field.getMod(), field.getMetadata().clientLogicOriginalMod);
        }
        return connection.getTypeInfo().getDisplaySize(field.getOID(), field.getMod());
    }
@@ -184,24 +195,42 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
        return allOk;
    }

    private Boolean isCleintLogicOn() {
        ClientLogic clientLogic = this.connection.getClientLogic();
        if (clientLogic == null) {
            return false;
        }
        return true;
    }
private StringBuilder getSql() {
StringBuilder sql = new StringBuilder(
"SELECT c.oid, a.attnum, a.attname, c.relname, n.nspname, "
+ "a.attnotnull OR (t.typtype = 'd' AND t.typnotnull), ");
if (connection.haveMinimumServerVersion(ServerVersion.v10)) {
sql.append("a.attidentity != '' OR pg_catalog.pg_get_expr(d.adbin, d.adrelid) LIKE '%nextval(%' ");
} else {
sql.append("pg_catalog.pg_get_expr(d.adbin, d.adrelid) LIKE '%nextval(%' ");
}
if (isCleintLogicOn()) {
sql.append(",data_type_original_oid, data_type_original_mod ");
}
sql.append( "FROM pg_catalog.pg_class c "
+ "JOIN pg_catalog.pg_namespace n ON (c.relnamespace = n.oid) "
+ "JOIN pg_catalog.pg_attribute a ON (c.oid = a.attrelid) ");
if (isCleintLogicOn()) {
sql.append("LEFT JOIN pg_catalog.gs_encrypted_columns ce ON (a.attrelid=ce.rel_id AND a.attname = ce.column_name)");
}
sql.append("JOIN pg_catalog.pg_type t ON (a.atttypid = t.oid) "
+ "LEFT JOIN pg_catalog.pg_attrdef d ON (d.adrelid = a.attrelid AND d.adnum = a.attnum) "
+ "JOIN (");
return sql;
}
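The conditional assembly in `getSql()` can be illustrated standalone: when client logic is on, the query gains the two original-type columns and the `gs_encrypted_columns` join. `MetaSqlSketch` is a hypothetical class and the column list is abbreviated; only the branching pattern follows the method above:

```java
// Standalone sketch of the conditional SQL assembly in getSql(): the client
// logic branch adds the gs_encrypted_columns join and the original-type columns.
public class MetaSqlSketch {
    static String buildSql(boolean clientLogicOn) {
        StringBuilder sql = new StringBuilder("SELECT c.oid, a.attnum, a.attname ");
        if (clientLogicOn) {
            sql.append(", data_type_original_oid, data_type_original_mod ");
        }
        sql.append("FROM pg_catalog.pg_class c ")
           .append("JOIN pg_catalog.pg_attribute a ON (c.oid = a.attrelid) ");
        if (clientLogicOn) {
            sql.append("LEFT JOIN pg_catalog.gs_encrypted_columns ce ")
               .append("ON (a.attrelid = ce.rel_id AND a.attname = ce.column_name) ");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildSql(true).contains("gs_encrypted_columns"));  // prints true
        System.out.println(buildSql(false).contains("gs_encrypted_columns")); // prints false
    }
}
```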
    private void executeSql(StringBuilder sql) throws SQLException {
        Statement stmt = connection.createStatement();
        ResultSet rs = null;
@@ -215,11 +244,15 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
            String columnName = rs.getString(3);
            String tableName = rs.getString(4);
            String schemaName = rs.getString(5);
            int clientLogicOriginalMod = 0;
            int nullable =
                rs.getBoolean(6) ? ResultSetMetaData.columnNoNulls : ResultSetMetaData.columnNullable;
            boolean autoIncrement = rs.getBoolean(7);
            if (isCleintLogicOn()) {
                clientLogicOriginalMod = rs.getInt(9);
            }
            FieldMetadata fieldMetadata =
                new FieldMetadata(columnName, tableName, schemaName, nullable, autoIncrement, clientLogicOriginalMod);
            FieldMetadata.Key key = new FieldMetadata.Key(table, column);
            md.put(key, fieldMetadata);
        }
@@ -292,11 +325,19 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
    public int getPrecision(int column) throws SQLException {
        Field field = getField(column);
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(field.getOID())) {
            fetchFieldMetaData();
            return connection.getTypeInfo().getPrecision(field.getMod(), field.getMetadata().clientLogicOriginalMod);
        }
        return connection.getTypeInfo().getPrecision(field.getOID(), field.getMod());
    }
    public int getScale(int column) throws SQLException {
        Field field = getField(column);
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(field.getOID())) {
            fetchFieldMetaData();
            return connection.getTypeInfo().getScale(field.getMod(), field.getMetadata().clientLogicOriginalMod);
        }
        return connection.getTypeInfo().getScale(field.getOID(), field.getMod());
    }
@@ -418,10 +459,16 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
    }

    protected String getPGType(int columnIndex) throws SQLException {
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(getField(columnIndex).getOID())) {
            return connection.getTypeInfo().getPGType(getField(columnIndex).getMod());
        }
        return connection.getTypeInfo().getPGType(getField(columnIndex).getOID());
    }

    protected int getSQLType(int columnIndex) throws SQLException {
        if (isCleintLogicOn() && ClientLogic.isClientLogicField(getField(columnIndex).getOID())) {
            return connection.getTypeInfo().getSQLType(getField(columnIndex).getMod());
        }
        return connection.getTypeInfo().getSQLType(getField(columnIndex).getOID());
    }
@@ -432,7 +479,11 @@ public class PgResultSetMetaData implements ResultSetMetaData, PGResultSetMetaData
    public String getColumnClassName(int column) throws SQLException {
        Field field = getField(column);
        int actualOID = field.getOID();
        if (ClientLogic.isClientLogicField(actualOID)) {
            actualOID = field.getMod();
        }
        String result = connection.getTypeInfo().getJavaClass(actualOID);
        if (result != null) {
            return result;
@@ -6,18 +6,9 @@
package org.postgresql.jdbc;

import org.postgresql.Driver;
import org.postgresql.core.*;
import org.postgresql.util.GT;
import org.postgresql.util.PGbytea;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.log.Logger;
@@ -140,6 +131,11 @@ public class PgStatement implements Statement, BaseStatement {
    protected int maxfieldSize = 0;

    /**
     * Used for client encryption to identify the statement name in libpq/clientlogic
     */
    protected String statementName = "";

    PgStatement(PgConnection c, int rsType, int rsConcurrency, int rsHoldability)
            throws SQLException {
        this.connection = c;
@@ -147,12 +143,16 @@ public class PgStatement implements Statement, BaseStatement {
        resultsettype = rsType;
        concurrency = rsConcurrency;
        setFetchSize(c.getDefaultFetchSize());
        if (c.getFetchSize() >= 0) {
            this.fetchSize = c.getFetchSize();
        }
        setPrepareThreshold(c.getPrepareThreshold());
        this.rsHoldability = rsHoldability;
        if (c.getClientLogic() != null) {
            this.statementName = c.getClientLogic().getStatementName();
        }
    }
    public ResultSet createResultSet(Query originalQuery, Field[] fields, List<byte[][]> tuples,
@@ -221,7 +221,7 @@ public class PgStatement implements Statement, BaseStatement {
    }

    @Override
    public void handleCommandStatus(String status, long updateCount, long insertOID) {
        append(new ResultWrapper(updateCount, insertOID));
    }
@@ -233,13 +233,32 @@ public class PgStatement implements Statement, BaseStatement {
    }

    public java.sql.ResultSet executeQuery(String p_sql) throws SQLException {
        ClientLogic clientLogic = this.connection.getClientLogic();
        String exception = "No results were returned by the query.";
        if (clientLogic != null) {
            exception = "";
        }
        if (!executeWithFlags(p_sql, 0)) {
            throw new PSQLException(GT.tr(exception), PSQLState.NO_DATA);
        }

        return getSingleResultSet();
    }
    /**
     * This method was added to support the fetchDataFromQuery method in the ClientLogicImpl class.
     * It executes the query while bypassing client logic, in order to get the raw client logic data.
     * @param p_sql the query to run
     * @return ResultSet with the data
     * @throws SQLException if the query returns no results
     */
java.sql.ResultSet executeQueryWithNoCL(String p_sql) throws SQLException {
if (!executeWithFlags(p_sql, QueryExecutor.QUERY_EXECUTE_BYPASS_CLIENT_LOGIC)) {
throw new PSQLException(GT.tr("No results were returned by the query."), PSQLState.NO_DATA);
}
return getSingleResultSet();
}
    protected java.sql.ResultSet getSingleResultSet() throws SQLException {
        synchronized (this) {
            checkClosed();
@@ -254,10 +273,11 @@ public class PgStatement implements Statement, BaseStatement {
    public int executeUpdate(String p_sql) throws SQLException {
        executeWithFlags(p_sql, QueryExecutor.QUERY_NO_RESULTS);
        checkNoResultUpdate();
        return getUpdateCount();
    }

    protected final void checkNoResultUpdate() throws SQLException {
        synchronized (this) {
            checkClosed();
            ResultWrapper iter = result;
@@ -270,7 +290,7 @@ public class PgStatement implements Statement, BaseStatement {
                iter = iter.getNext();
            }
        }
    }
@@ -387,10 +407,74 @@ public class PgStatement implements Statement, BaseStatement {
        }
    }
/**
* Sets parameter value of prepared statement to client logic binary format
* @param queryParameters the query parameters
* @param parameterIndex the index to change
* @param x bytes data
* @param customOid the field oid
* @throws SQLException
*/
private void setClientLogicBytea(ParameterList queryParameters, int parameterIndex, byte[] x, int customOid) throws SQLException {
checkClosed();
if (null == x) {
queryParameters.setNull(parameterIndex, Oid.BYTEA);
return;
}
byte[] copy = new byte[x.length];
System.arraycopy(x, 0, copy, 0, x.length);
queryParameters.setClientLogicBytea(parameterIndex, copy, 0, x.length, customOid);
}
/**
* For prepared statements, replace client logic parameters from user input value to binary client logic value
* @param statementName statement name that was used by preQuery on libpq
* @param queryParameters the query parameters
* @throws SQLException
*/
    private void replaceClientLogicParameters(String statementName, ParameterList queryParameters) throws SQLException {
        if (queryParameters.getInParameterCount() > 0) {
            ClientLogic clientLogic = this.connection.getClientLogic();
            if (clientLogic != null) {
                List<String> listParameterValuesBeforeCL = new ArrayList<>();
                // Getting the string values of all parameters
                String[] arrParameterValuesBeforeCL = queryParameters.getLiteralValues();
                if (arrParameterValuesBeforeCL != null) {
                    for (int i = 0; i < queryParameters.getInParameterCount(); ++i) {
                        String valueBeforeCL = arrParameterValuesBeforeCL[i];
                        if (valueBeforeCL == null) {
                            valueBeforeCL = "";
                        }
                        listParameterValuesBeforeCL.add(valueBeforeCL);
                    }
                    List<String> modifiedParameters;
                    try {
                        // Getting the client logic binary value from the back-end
                        modifiedParameters = clientLogic.replaceStatementParams(statementName, listParameterValuesBeforeCL);
                    } catch (ClientLogicException e) {
                        LOGGER.error("Error: '" + e.getErrorText() + "' while running client logic to change parameters");
                        throw new SQLException("Error: '" + e.getErrorText() + "' while running client logic to change parameters");
                    }
                    int indexParam = 1;
                    for (String modifiedParam : modifiedParameters) {
                        if (modifiedParam != null) {
                            // Convert the data to binary
                            byte[] dataInBytes = PGbytea.toBytes(modifiedParam.getBytes());
                            this.setClientLogicBytea(queryParameters, indexParam, dataInBytes, 4402);
                        }
                        ++indexParam;
                    }
                }
            }
        }
    }
    private void executeInternal(CachedQuery cachedQuery, ParameterList queryParameters, int flags)
            throws SQLException {
        closeForNextExecution();

        // Replace the query for the client logic case
        ClientLogic clientLogic = replaceQueryForClientLogic(cachedQuery, queryParameters, flags);

        // Enable cursor-based resultset if possible.
        if (fetchSize > 0 && !wantsScrollableResultSet() && !connection.getAutoCommit()
                && !wantsHoldableResultSet()) {
@@ -402,7 +486,6 @@ public class PgStatement implements Statement, BaseStatement {
        // If the no results flag is set (from executeUpdate)
        // clear it so we get the generated keys results.
        if ((flags & QueryExecutor.QUERY_NO_RESULTS) != 0) {
            flags &= ~(QueryExecutor.QUERY_NO_RESULTS);
        }
@@ -436,8 +519,13 @@ public class PgStatement implements Statement, BaseStatement {
            // thus sending a describe request.
            int flags2 = flags | QueryExecutor.QUERY_DESCRIBE_ONLY;
            StatementResultHandler handler2 = new StatementResultHandler();
            if (clientLogic != null) {
                runQueryExecutorForClientLogic(queryParameters, clientLogic, queryToExecute, flags2, handler2, 0, 0);
            } else {
                connection.getQueryExecutor().execute(queryToExecute, queryParameters, handler2, 0, 0,
                        flags2);
            }
            ResultWrapper result2 = handler2.getResults();
            if (result2 != null) {
                result2.getResultSet().close();
@@ -448,13 +536,38 @@ public class PgStatement implements Statement, BaseStatement {
        synchronized (this) {
            result = null;
        }

        runQueryExecutor(queryParameters, flags, clientLogic, queryToExecute, handler);
        updateGeneratedKeyStatus(clientLogic, handler);
        runQueryPostProcess(clientLogic);
    }
private void runQueryPostProcess(ClientLogic clientLogic) {
try {
if (clientLogic != null && !(this instanceof PgPreparedStatement)) {
clientLogic.runQueryPostProcess();
}
}
catch(ClientLogicException e) {
LOGGER.error("Failed running runQueryPostProcess Error: " + e.getErrorCode() + ":" + e.getErrorText());
}
}
    private void runQueryExecutor(ParameterList queryParameters, int flags, ClientLogic clientLogic,
            Query queryToExecute, org.postgresql.jdbc.PgStatement.StatementResultHandler handler) throws SQLException {
        try {
            startTimer();
            if (clientLogic != null) {
                runQueryExecutorForClientLogic(queryParameters, clientLogic, queryToExecute, flags, handler, maxrows, fetchSize);
            } else {
                connection.getQueryExecutor().execute(queryToExecute, queryParameters, handler, maxrows,
                        fetchSize, flags);
            }
        } finally {
            killTimerTask();
        }
    }
private void updateGeneratedKeyStatus(ClientLogic clientLogic, org.postgresql.jdbc.PgStatement.StatementResultHandler handler) throws SQLException {
        synchronized (this) {
            checkClosed();
            result = firstUnclosedResult = handler.getResults();
@@ -470,6 +583,55 @@ public class PgStatement implements Statement, BaseStatement {
        }
    }
private void runQueryExecutorForClientLogic(ParameterList queryParameters, ClientLogic clientLogic, Query queryToExecute, int flags2, org.postgresql.jdbc.PgStatement.StatementResultHandler handler2, int i, int i2) throws SQLException {
try {
connection.getQueryExecutor().execute(queryToExecute, queryParameters, handler2, i, i2,
flags2);
} catch (SQLException sqlException) {
//Client logic should be able to change the error message back to use user input
String updatedMessage = clientLogic.clientLogicMessage(sqlException.getMessage());
throw new SQLException(updatedMessage);
}
}
private ClientLogic replaceQueryForClientLogic(CachedQuery cachedQuery, ParameterList queryParameters, int flags) throws SQLException {
ClientLogic clientLogic = null;
if ((flags & QueryExecutor.QUERY_EXECUTE_BYPASS_CLIENT_LOGIC) == 0) {
clientLogic = this.connection.getClientLogic();
}
        if (clientLogic != null) {
            // we use subqueries to check for the multiple-statements scenario; getSubqueries() returns null for a simple query
            if (cachedQuery.query.getSubqueries() != null) {
                LOGGER.error("multiple statements are not allowed under the client logic routine.");
                throw new SQLException("multiple statements are not allowed under the client logic routine, please split it up into one simple query per statement.");
            }
String modifiedQuery = cachedQuery.query.getNativeSql();
try {
if (this instanceof PgPreparedStatement) {
modifiedQuery = clientLogic.prepareQuery(modifiedQuery, statementName);
replaceClientLogicParameters(statementName, queryParameters);
}
else {
modifiedQuery = clientLogic.runQueryPreProcess(cachedQuery.query.getNativeSql());
}
cachedQuery.query.replaceNativeSqlForClientLogic(modifiedQuery);
}
catch(ClientLogicException e) {
if (e.isParsingError()) {
/*
* we should not block bad queries to be sent to the server
* PgConnection.isValid is based on error that is not parsed correctly
*/
LOGGER.debug("pre query failed for parsing error, moving on");
} else {
LOGGER.debug("Failed running runQueryPreProcess on executeInternal " + e.getErrorCode() + ":" + e.getErrorText());
throw new SQLException(e.getErrorText());
}
}
}
return clientLogic;
}
    public void setCursorName(String name) throws SQLException {
        checkClosed();
        // No-op.
@@ -477,6 +639,7 @@ public class PgStatement implements Statement, BaseStatement {
    private volatile boolean isClosed = false;

    @Override
    public int getUpdateCount() throws SQLException {
        synchronized (this) {
            checkClosed();
@@ -484,7 +647,8 @@ public class PgStatement implements Statement, BaseStatement {
                return -1;
            }

            long count = result.getUpdateCount();
            return count > Integer.MAX_VALUE ? Statement.SUCCESS_NO_INFO : (int) count;
        }
    }
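The long-to-int clamping in `getUpdateCount` can be sketched independently of the driver; `UpdateCountClamp` is a hypothetical class, and `SUCCESS_NO_INFO` mirrors the `java.sql.Statement.SUCCESS_NO_INFO` constant (-2) used above:

```java
// Minimal sketch of the long-to-int clamping performed by getUpdateCount():
// counts that do not fit in an int collapse to SUCCESS_NO_INFO, meaning
// "statement succeeded but the count is not representable".
public class UpdateCountClamp {
    static final int SUCCESS_NO_INFO = -2; // same value as java.sql.Statement.SUCCESS_NO_INFO

    static int clamp(long count) {
        return count > Integer.MAX_VALUE ? SUCCESS_NO_INFO : (int) count;
    }

    public static void main(String[] args) {
        System.out.println(clamp(42L));            // prints 42
        System.out.println(clamp(3_000_000_000L)); // prints -2
    }
}
```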
@@ -750,26 +914,18 @@ public class PgStatement implements Statement, BaseStatement {
            wantsGeneratedKeysAlways);
    }

    private BatchResultHandler internalExecuteBatch() throws SQLException {
        // Construct query/parameter arrays.
        transformQueriesAndParameters();
        // Empty arrays should be passed to toArray
        // see http://shipilev.net/blog/2016/arrays-wisdom-ancients/
        Query[] queries = batchStatements.toArray(new Query[0]);
        ParameterList[] parameterLists = batchParameters.toArray(new ParameterList[0]);
        batchStatements.clear();
        batchParameters.clear();

        int flags;

        // Force a Describe before any execution? We need to do this if we're going
        // to send anything dependent on the Describe results, e.g. binary parameters.
@@ -831,6 +987,9 @@ public class PgStatement implements Statement, BaseStatement {
            flags |= QueryExecutor.QUERY_SUPPRESS_BEGIN;
        }

        // Client logic case - handle the queries
        handleQueriesForClientLogic(queries, parameterLists);

        BatchResultHandler handler;
        handler = createBatchHandler(queries, parameterLists);
@@ -890,7 +1049,46 @@ public class PgStatement implements Statement, BaseStatement {
            }
        }

        return handler;
    }
private void handleQueriesForClientLogic(Query[] queries, ParameterList[] parameterLists) throws SQLException {
ClientLogic clientLogic = this.connection.getClientLogic();
if (clientLogic != null) {
for (int queriesCounter = 0; queriesCounter < queries.length; ++queriesCounter) {
String modifiedQuery = queries[queriesCounter].getNativeSql();
try {
if (this instanceof PgPreparedStatement) {
clientLogic.prepareQuery(modifiedQuery, statementName);
if (parameterLists != null && parameterLists.length > queriesCounter) {
if (parameterLists[queriesCounter] != null) {
replaceClientLogicParameters(statementName, parameterLists[queriesCounter]);
}
}
}
else {
modifiedQuery = clientLogic.runQueryPreProcess(modifiedQuery);
queries[queriesCounter].replaceNativeSqlForClientLogic(modifiedQuery);
}
}
catch(ClientLogicException e) {
LOGGER.debug("Failed running runQueryPreProcess on executeBatch " + e.getErrorCode() + ":" + e.getErrorText());
throw new SQLException(e.getErrorText());
}
}
}
}
public int[] executeBatch() throws SQLException {
checkClosed();
closeForNextExecution();
if (batchStatements == null || batchStatements.isEmpty()) {
return new int[0];
}
return internalExecuteBatch().getUpdateCount();
    }
    public boolean checkParameterList(ParameterList[] paramlist) {
@@ -1049,8 +1247,16 @@ public class PgStatement implements Statement, BaseStatement {
        return forceBinaryTransfers;
    }

    @Override
    public long getLargeUpdateCount() throws SQLException {
        synchronized (this) {
            checkClosed();
            if (result == null || result.getResultSet() != null) {
                return -1;
            }

            return result.getUpdateCount();
        }
    }
    public void setLargeMaxRows(long max) throws SQLException {
@@ -1061,28 +1267,54 @@ public class PgStatement implements Statement, BaseStatement {
        throw Driver.notImplemented(this.getClass(), "getLargeMaxRows");
    }
@Override
public long[] executeLargeBatch() throws SQLException { public long[] executeLargeBatch() throws SQLException {
throw Driver.notImplemented(this.getClass(), "executeLargeBatch"); checkClosed();
closeForNextExecution();
if (batchStatements == null || batchStatements.isEmpty()) {
return new long[0];
}
return internalExecuteBatch().getLargeUpdateCount();
} }
+   @Override
    public long executeLargeUpdate(String sql) throws SQLException {
-       throw Driver.notImplemented(this.getClass(), "executeLargeUpdate");
+       executeWithFlags(sql, QueryExecutor.QUERY_NO_RESULTS);
+       checkNoResultUpdate();
+       return getLargeUpdateCount();
    }
+   @Override
    public long executeLargeUpdate(String sql, int autoGeneratedKeys) throws SQLException {
-       throw Driver.notImplemented(this.getClass(), "executeLargeUpdate");
+       if (autoGeneratedKeys == Statement.NO_GENERATED_KEYS) {
+           return executeLargeUpdate(sql);
+       }
+       return executeLargeUpdate(sql, (String[]) null);
    }
+   @Override
    public long executeLargeUpdate(String sql, int[] columnIndexes) throws SQLException {
-       throw Driver.notImplemented(this.getClass(), "executeLargeUpdate");
+       if (columnIndexes == null || columnIndexes.length == 0) {
+           return executeLargeUpdate(sql);
+       }
+       throw new PSQLException(GT.tr("Returning autogenerated keys by column index is not supported."),
+               PSQLState.NOT_IMPLEMENTED);
    }
+   @Override
    public long executeLargeUpdate(String sql, String[] columnNames) throws SQLException {
-       throw Driver.notImplemented(this.getClass(), "executeLargeUpdate");
-   }
-
-   public long executeLargeUpdate() throws SQLException {
-       throw Driver.notImplemented(this.getClass(), "executeLargeUpdate");
+       if (columnNames != null && columnNames.length == 0) {
+           return executeLargeUpdate(sql);
+       }
+       wantsGeneratedKeysOnce = true;
+       executeCachedSql(sql, 0, columnNames);
+       return getLargeUpdateCount();
    }
    public boolean isClosed() throws SQLException {


@@ -21,7 +21,7 @@ public class ResultWrapper {
        this.insertOID = -1;
    }

-   public ResultWrapper(int updateCount, long insertOID) {
+   public ResultWrapper(long updateCount, long insertOID) {
        this.rs = null;
        this.updateCount = updateCount;
        this.insertOID = insertOID;
@@ -31,7 +31,7 @@ public class ResultWrapper {
        return rs;
    }

-   public int getUpdateCount() {
+   public long getUpdateCount() {
        return updateCount;
    }
@@ -53,7 +53,7 @@ public class ResultWrapper {
    }

    private final ResultSet rs;
-   private final int updateCount;
+   private final long updateCount;
    private final long insertOID;
    private ResultWrapper next;
}
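The int-to-long widening in ResultWrapper matters because an update count beyond Integer.MAX_VALUE cannot survive a narrowing cast. A minimal standalone sketch (the class name is illustrative, not part of the driver):

```java
public class LargeCountDemo {
    public static void main(String[] args) {
        // The row count used by the BIG tests later in this commit:
        // Integer.MAX_VALUE + 110 rows.
        long rows = 2_147_483_757L;

        // Statement.getUpdateCount() returns int, so the value would be
        // silently corrupted by the narrowing cast.
        int truncated = (int) rows;
        System.out.println(truncated);   // -2147483539, a nonsense count

        // getLargeUpdateCount() keeps the long and stays accurate.
        System.out.println(rows);        // 2147483757
    }
}
```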


@@ -266,7 +266,7 @@ public class TypeInfoCache implements TypeInfo {
        ResultSet rs = oidStatement.getResultSet();
        if (rs.next()) {
            oid = (int) rs.getLong(1);
-           String internalName = pgTypeName.toLowerCase();
+           String internalName = rs.getString(2);
            _oidToPgName.put(oid, internalName);
            _pgNameToOid.put(internalName, oid);
        }


@@ -1,7 +1,7 @@
 package org.postgresql.log;

-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+import com.huawei.shade.org.slf4j.Logger;
+import com.huawei.shade.org.slf4j.LoggerFactory;

 public class Slf4JLogger implements Log {
    private Logger logger;


@@ -0,0 +1,33 @@
package org.postgresql.util;

import java.util.Arrays;

public class ClusterSpec implements Comparable {
    protected final HostSpec[] hostSpecs;

    public ClusterSpec(HostSpec[] hostSpecs) {
        this.hostSpecs = hostSpecs.clone();
    }

    public HostSpec[] getHostSpecs() {
        return hostSpecs.clone();
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        ClusterSpec that = (ClusterSpec) o;
        return Arrays.equals(hostSpecs, that.hostSpecs);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(hostSpecs);
    }

    @Override
    public int compareTo(Object o) {
        return this.toString().compareTo(o.toString());
    }
}
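ClusterSpec keys equality and hashing off the host array via java.util.Arrays, which makes a cluster usable as a map key for per-cluster state. The contract can be checked in isolation; this sketch substitutes plain strings for HostSpec, so the class and host names are illustrative only:

```java
import java.util.Arrays;

public class ClusterSpecSketch {
    // Same equals/hashCode strategy as ClusterSpec, with String standing in
    // for HostSpec to keep the sketch self-contained.
    static final class Cluster {
        final String[] hosts;

        Cluster(String[] hosts) {
            this.hosts = hosts.clone();   // defensive copy, as in ClusterSpec
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Cluster && Arrays.equals(hosts, ((Cluster) o).hosts);
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(hosts);
        }
    }

    public static void main(String[] args) {
        Cluster a = new Cluster(new String[]{"host1:5432", "host2:5432"});
        Cluster b = new Cluster(new String[]{"host1:5432", "host2:5432"});
        // Equal element-wise, so the two specs hash and compare as one cluster.
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());   // true
    }
}
```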


@@ -442,6 +442,26 @@ public class TestUtil {
        }
    }

+   /*
+    * Helper - creates an unlogged table for use by a test.
+    * Unlogged tables work from PostgreSQL 9.1 onwards.
+    */
+   public static void createUnloggedTable(Connection con, String table, String columns)
+           throws SQLException {
+       Statement st = con.createStatement();
+       try {
+           // Drop the table
+           dropTable(con, table);
+           String unlogged = haveMinimumServerVersion(con, ServerVersion.v9_1) ? "UNLOGGED" : "";
+           // Now create the table
+           st.executeUpdate("CREATE " + unlogged + " TABLE " + table + " (" + columns + ")");
+       } finally {
+           closeQuietly(st);
+       }
+   }

    /**
     * Helper creates an enum type.
     *


@@ -366,9 +366,9 @@ public class MultiHostsConnectionTest {
    @Test
    public void testHostRechecks() throws SQLException, InterruptedException {
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Master);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary);
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Master, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary, new Properties());

        try {
            getConnection(master, false, fake1, secondary1, master1);
@@ -376,9 +376,9 @@ public class MultiHostsConnectionTest {
        } catch (SQLException ex) {
        }

-       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Master);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary);
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Master, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary, new Properties());

        SECONDS.sleep(3);
@@ -388,9 +388,9 @@ public class MultiHostsConnectionTest {
    @Test
    public void testNoGoodHostsRechecksEverything() throws SQLException, InterruptedException {
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Secondary);
-       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary);
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(master1), Secondary, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(secondary1), Secondary, new Properties());
+       GlobalHostStatusTracker.reportHostStatus(hostSpec(fake1), Secondary, new Properties());

        getConnection(master, false, secondary1, fake1, master1);
        assertRemote(masterIp);


@@ -17,6 +17,7 @@ import org.junit.runners.Suite.SuiteClasses;
    PreparedStatementTest.class,
    Jdbc42CallableStatementTest.class,
    GetObject310InfinityTests.class,
+   LargeCountJdbc42Test.class,
    SetObject310Test.class})
public class Jdbc42TestSuite {


@@ -0,0 +1,412 @@
/*
* Copyright (c) 2018, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.test.jdbc42;
import org.postgresql.PGProperty;
import org.postgresql.test.TestUtil;
import org.postgresql.test.jdbc2.BaseTest4;
import org.postgresql.util.PSQLState;
import org.junit.Assert;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Properties;
/**
* Test methods with small counts that return long, plus failure scenarios. This class has two
* really big and slow tests; they are ignored in CI but can be run locally to check that it works.
*/
@RunWith(Parameterized.class)
public class LargeCountJdbc42Test extends BaseTest4 {
private final boolean insertRewrite;
public LargeCountJdbc42Test(BinaryMode binaryMode, boolean insertRewrite) {
this.insertRewrite = insertRewrite;
setBinaryMode(binaryMode);
}
@Parameterized.Parameters(name = "binary = {0}, insertRewrite = {1}")
public static Iterable<Object[]> data() {
Collection<Object[]> ids = new ArrayList<>();
for (BinaryMode binaryMode : BinaryMode.values()) {
for (boolean insertRewrite : new boolean[]{false, true}) {
ids.add(new Object[]{binaryMode, insertRewrite});
}
}
return ids;
}
@Override
protected void updateProperties(Properties props) {
super.updateProperties(props);
PGProperty.REWRITE_BATCHED_INSERTS.set(props, insertRewrite);
}
@Override
public void setUp() throws Exception {
super.setUp();
TestUtil.createUnloggedTable(con, "largetable", "a boolean");
}
@Override
public void tearDown() throws SQLException {
TestUtil.dropTable(con, "largetable");
TestUtil.closeDB(con);
}
// ********************* EXECUTE LARGE UPDATES *********************
// FINEST: simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@38cccef, maxRows=0, fetchSize=0, flags=21
// FINEST: FE=> Parse(stmt=null,query="insert into largetable select true from generate_series($1, $2)",oids={20,20})
// FINEST: FE=> Bind(stmt=null,portal=null,$1=<1>,$2=<2147483757>)
// FINEST: FE=> Describe(portal=null)
// FINEST: FE=> Execute(portal=null,limit=1)
// FINEST: FE=> Sync
// FINEST: <=BE ParseComplete [null]
// FINEST: <=BE BindComplete [unnamed]
// FINEST: <=BE NoData
// FINEST: <=BE CommandStatus(INSERT 0 2147483757)
// FINEST: <=BE ReadyForQuery(I)
// FINEST: simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@5679c6c6, maxRows=0, fetchSize=0, flags=21
// FINEST: FE=> Parse(stmt=null,query="delete from largetable",oids={})
// FINEST: FE=> Bind(stmt=null,portal=null)
// FINEST: FE=> Describe(portal=null)
// FINEST: FE=> Execute(portal=null,limit=1)
// FINEST: FE=> Sync
// FINEST: <=BE ParseComplete [null]
// FINEST: <=BE BindComplete [unnamed]
// FINEST: <=BE NoData
// FINEST: <=BE CommandStatus(DELETE 2147483757)
/*
* Test PreparedStatement.executeLargeUpdate() and Statement.executeLargeUpdate(String sql)
*/
@Ignore("This is the big and SLOW test")
@Test
public void testExecuteLargeUpdateBIG() throws Exception {
long expected = Integer.MAX_VALUE + 110L;
con.setAutoCommit(false);
// Test PreparedStatement.executeLargeUpdate()
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
stmt.setLong(1, 1);
stmt.setLong(2, 2_147_483_757L); // Integer.MAX_VALUE + 110L
long count = stmt.executeLargeUpdate();
Assert.assertEquals("PreparedStatement 110 rows more than Integer.MAX_VALUE", expected, count);
}
// Test Statement.executeLargeUpdate(String sql)
try (Statement stmt = con.createStatement()) {
long count = stmt.executeLargeUpdate("delete from largetable");
Assert.assertEquals("Statement 110 rows more than Integer.MAX_VALUE", expected, count);
}
con.setAutoCommit(true);
}
/*
* Test Statement.executeLargeUpdate(String sql)
*/
@Test
public void testExecuteLargeUpdateStatementSMALL() throws Exception {
try (Statement stmt = con.createStatement()) {
long count = stmt.executeLargeUpdate("insert into largetable "
+ "select true from generate_series(1, 1010)");
long expected = 1010L;
Assert.assertEquals("Small long return 1010L", expected, count);
}
}
/*
* Test PreparedStatement.executeLargeUpdate();
*/
@Test
public void testExecuteLargeUpdatePreparedStatementSMALL() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
stmt.setLong(1, 1);
stmt.setLong(2, 1010L);
long count = stmt.executeLargeUpdate();
long expected = 1010L;
Assert.assertEquals("Small long return 1010L", expected, count);
}
}
/*
* Test Statement.getLargeUpdateCount();
*/
@Test
public void testGetLargeUpdateCountStatementSMALL() throws Exception {
try (Statement stmt = con.createStatement()) {
boolean isResult = stmt.execute("insert into largetable "
+ "select true from generate_series(1, 1010)");
Assert.assertFalse("False if it is an update count or there are no results", isResult);
long count = stmt.getLargeUpdateCount();
long expected = 1010L;
Assert.assertEquals("Small long return 1010L", expected, count);
}
}
/*
* Test PreparedStatement.getLargeUpdateCount();
*/
@Test
public void testGetLargeUpdateCountPreparedStatementSMALL() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
stmt.setInt(1, 1);
stmt.setInt(2, 1010);
boolean isResult = stmt.execute();
Assert.assertFalse("False if it is an update count or there are no results", isResult);
long count = stmt.getLargeUpdateCount();
long expected = 1010L;
Assert.assertEquals("Small long return 1010L", expected, count);
}
}
/*
* Test fail SELECT Statement.executeLargeUpdate(String sql)
*/
@Test
public void testExecuteLargeUpdateStatementSELECT() throws Exception {
try (Statement stmt = con.createStatement()) {
long count = stmt.executeLargeUpdate("select true from generate_series(1, 5)");
Assert.fail("A result was returned when none was expected. Returned: " + count);
} catch (SQLException e) {
Assert.assertEquals(PSQLState.TOO_MANY_RESULTS.getState(), e.getSQLState());
}
}
/*
* Test fail SELECT PreparedStatement.executeLargeUpdate();
*/
@Test
public void testExecuteLargeUpdatePreparedStatementSELECT() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("select true from generate_series(?, ?)")) {
stmt.setLong(1, 1);
stmt.setLong(2, 5L);
long count = stmt.executeLargeUpdate();
Assert.fail("A result was returned when none was expected. Returned: " + count);
} catch (SQLException e) {
Assert.assertEquals(PSQLState.TOO_MANY_RESULTS.getState(), e.getSQLState());
}
}
/*
* Test Statement.getLargeUpdateCount();
*/
@Test
public void testGetLargeUpdateCountStatementSELECT() throws Exception {
try (Statement stmt = con.createStatement()) {
boolean isResult = stmt.execute("select true from generate_series(1, 5)");
Assert.assertTrue("True since this is a SELECT", isResult);
long count = stmt.getLargeUpdateCount();
long expected = -1L;
Assert.assertEquals("-1 if the current result is a ResultSet object", expected, count);
}
}
/*
* Test PreparedStatement.getLargeUpdateCount();
*/
@Test
public void testGetLargeUpdateCountPreparedStatementSELECT() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("select true from generate_series(?, ?)")) {
stmt.setLong(1, 1);
stmt.setLong(2, 5L);
boolean isResult = stmt.execute();
Assert.assertTrue("True since this is a SELECT", isResult);
long count = stmt.getLargeUpdateCount();
long expected = -1L;
Assert.assertEquals("-1 if the current result is a ResultSet object", expected, count);
}
}
// ********************* BATCH LARGE UPDATES *********************
// FINEST: batch execute 3 queries, handler=org.postgresql.jdbc.BatchResultHandler@3d04a311, maxRows=0, fetchSize=0, flags=21
// FINEST: FE=> Parse(stmt=null,query="insert into largetable select true from generate_series($1, $2)",oids={23,23})
// FINEST: FE=> Bind(stmt=null,portal=null,$1=<1>,$2=<200>)
// FINEST: FE=> Describe(portal=null)
// FINEST: FE=> Execute(portal=null,limit=1)
// FINEST: FE=> Parse(stmt=null,query="insert into largetable select true from generate_series($1, $2)",oids={23,20})
// FINEST: FE=> Bind(stmt=null,portal=null,$1=<1>,$2=<3000000000>)
// FINEST: FE=> Describe(portal=null)
// FINEST: FE=> Execute(portal=null,limit=1)
// FINEST: FE=> Parse(stmt=null,query="insert into largetable select true from generate_series($1, $2)",oids={23,23})
// FINEST: FE=> Bind(stmt=null,portal=null,$1=<1>,$2=<50>)
// FINEST: FE=> Describe(portal=null)
// FINEST: FE=> Execute(portal=null,limit=1)
// FINEST: FE=> Sync
// FINEST: <=BE ParseComplete [null]
// FINEST: <=BE BindComplete [unnamed]
// FINEST: <=BE NoData
// FINEST: <=BE CommandStatus(INSERT 0 200)
// FINEST: <=BE ParseComplete [null]
// FINEST: <=BE BindComplete [unnamed]
// FINEST: <=BE NoData
// FINEST: <=BE CommandStatus(INSERT 0 3000000000)
// FINEST: <=BE ParseComplete [null]
// FINEST: <=BE BindComplete [unnamed]
// FINEST: <=BE NoData
// FINEST: <=BE CommandStatus(INSERT 0 50)
/*
* Test simple PreparedStatement.executeLargeBatch();
*/
@Ignore("This is the big and SLOW test")
@Test
public void testExecuteLargeBatchStatementBIG() throws Exception {
con.setAutoCommit(false);
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
stmt.setInt(1, 1);
stmt.setInt(2, 200);
stmt.addBatch(); // statement one
stmt.setInt(1, 1);
stmt.setLong(2, 3_000_000_000L);
stmt.addBatch(); // statement two
stmt.setInt(1, 1);
stmt.setInt(2, 50);
stmt.addBatch(); // statement three
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("Large rows inserted via 3 batch", new long[]{200L, 3_000_000_000L, 50L}, actual);
}
con.setAutoCommit(true);
}
/*
* Test simple Statement.executeLargeBatch();
*/
@Test
public void testExecuteLargeBatchStatementSMALL() throws Exception {
try (Statement stmt = con.createStatement()) {
stmt.addBatch("insert into largetable(a) select true"); // statement one
stmt.addBatch("insert into largetable select false"); // statement two
stmt.addBatch("insert into largetable(a) values(true)"); // statement three
stmt.addBatch("insert into largetable values(false)"); // statement four
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("Rows inserted via 4 batch", new long[]{1L, 1L, 1L, 1L}, actual);
}
}
/*
* Test simple PreparedStatement.executeLargeBatch();
*/
@Test
public void testExecuteLargePreparedStatementStatementSMALL() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
stmt.setInt(1, 1);
stmt.setInt(2, 200);
stmt.addBatch(); // statement one
stmt.setInt(1, 1);
stmt.setInt(2, 100);
stmt.addBatch(); // statement two
stmt.setInt(1, 1);
stmt.setInt(2, 50);
stmt.addBatch(); // statement three
stmt.addBatch(); // statement four, same parms as three
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("Rows inserted via 4 batch", new long[]{200L, 100L, 50L, 50L}, actual);
}
}
/*
* Test loop PreparedStatement.executeLargeBatch();
*/
@Test
public void testExecuteLargePreparedStatementStatementLoopSMALL() throws Exception {
long[] loop = {200, 100, 50, 300, 20, 60, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
try (PreparedStatement stmt = con.prepareStatement("insert into largetable "
+ "select true from generate_series(?, ?)")) {
for (long i : loop) {
stmt.setInt(1, 1);
stmt.setLong(2, i);
stmt.addBatch();
}
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("Rows inserted via batch", loop, actual);
}
}
/*
* Test loop PreparedStatement.executeLargeBatch();
*/
@Test
public void testExecuteLargeBatchValuesInsertSMALL() throws Exception {
boolean[] loop = {true, false, true, false, false, false, true, true, true, true, false, true};
try (PreparedStatement stmt = con.prepareStatement("insert into largetable values(?)")) {
for (boolean i : loop) {
stmt.setBoolean(1, i);
stmt.addBatch();
}
long[] actual = stmt.executeLargeBatch();
Assert.assertEquals("Rows inserted via batch", loop.length, actual.length);
for (long i : actual) {
if (insertRewrite) {
Assert.assertEquals(Statement.SUCCESS_NO_INFO, i);
} else {
Assert.assertEquals(1, i);
}
}
}
}
/*
* Test null PreparedStatement.executeLargeBatch();
*/
@Test
public void testNullExecuteLargeBatchStatement() throws Exception {
try (Statement stmt = con.createStatement()) {
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("addBatch() not called batchStatements is null", new long[0], actual);
}
}
/*
* Test empty PreparedStatement.executeLargeBatch();
*/
@Test
public void testEmptyExecuteLargeBatchStatement() throws Exception {
try (Statement stmt = con.createStatement()) {
stmt.addBatch("");
stmt.clearBatch();
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("clearBatch() called, batchStatements.isEmpty()", new long[0], actual);
}
}
/*
* Test null PreparedStatement.executeLargeBatch();
*/
@Test
public void testNullExecuteLargeBatchPreparedStatement() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("")) {
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("addBatch() not called batchStatements is null", new long[0], actual);
}
}
/*
* Test empty PreparedStatement.executeLargeBatch();
*/
@Test
public void testEmptyExecuteLargeBatchPreparedStatement() throws Exception {
try (PreparedStatement stmt = con.prepareStatement("")) {
stmt.addBatch();
stmt.clearBatch();
long[] actual = stmt.executeLargeBatch();
Assert.assertArrayEquals("clearBatch() called, batchStatements.isEmpty()", new long[0], actual);
}
}
}
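The insertRewrite branch in testExecuteLargeBatchValuesInsertSMALL relies on two sentinel constants defined by java.sql.Statement; a quick standalone check of their values (the demo class name is illustrative):

```java
import java.sql.Statement;

public class BatchSentinelDemo {
    public static void main(String[] args) {
        // When reWriteBatchedInserts collapses many batch entries into one
        // multi-values INSERT, the driver cannot report a per-entry count and
        // returns SUCCESS_NO_INFO for each entry instead.
        System.out.println(Statement.SUCCESS_NO_INFO);   // -2
        // EXECUTE_FAILED is the companion sentinel for entries that failed.
        System.out.println(Statement.EXECUTE_FAILED);    // -3
    }
}
```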


@@ -1,3 +1,4 @@
+<?xml version="1.0" encoding="utf-8" ?>
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.huawei</groupId>
@@ -27,8 +28,12 @@
                 <artifactId>maven-compiler-plugin</artifactId>
                 <version>3.8.0</version>
                 <configuration>
-                    <target>8</target>
-                    <source>8</source>
+                    <target>1.8</target>
+                    <source>1.8</source>
+                    <showWarnings>true</showWarnings>
+                    <compilerArgs>
+                        <arg>-Xlint:all</arg>
+                    </compilerArgs>
                 </configuration>
             </plugin>
         </plugins>


@@ -1,3 +1,4 @@
+<?xml version="1.0" encoding="utf-8" ?>
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
@@ -29,7 +30,7 @@
         <dependency>
             <groupId>com.alibaba</groupId>
             <artifactId>fastjson</artifactId>
-            <version>1.2.70</version>
+            <version>1.2.75</version>
         </dependency>
         <dependency>
             <groupId>joda-time</groupId>
@@ -61,6 +62,11 @@
             <artifactId>bcprov-jdk15on</artifactId>
             <version>1.68</version>
         </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+            <version>1.7.30</version>
+        </dependency>
     </dependencies>
     <build>
         <plugins>
@@ -79,6 +85,10 @@
                     </goals>
                     <configuration>
                         <relocations>
+                            <relocation>
+                                <pattern>org.slf4j</pattern>
+                                <shadedPattern>com.huawei.shade.org.slf4j</shadedPattern>
+                            </relocation>
                             <relocation>
                                 <pattern>org.apache</pattern>
                                 <shadedPattern>com.huawei.shade.org.apache</shadedPattern>
@@ -105,6 +115,13 @@
                             </relocation>
                         </relocations>
                         <filters>
+                            <filter>
+                                <artifact>org.slf4j:*</artifact>
+                                <excludes>
+                                    <exclude>META-INF/LICENSE.txt</exclude>
+                                    <exclude>META-INF/NOTICE.txt</exclude>
+                                </excludes>
+                            </filter>
                             <filter>
                                 <artifact>org.apache.httpcomponents:*</artifact>
                                 <excludes>