[Fix](MySQLLoad) Fix a bug when loading a big local file: the ByteBuffer from the MySQL packet reuses the same byte array (#16901)
Loading a big local file causes an `[INTERNAL_ERROR]too many filtered rows` error because the ByteBuffer from the MySQL client always reuses the same backing byte array: later packets overwrite earlier ones, corrupting the byte order on the wire. Copy the byte array before writing it to the network.
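The failure mode described above can be sketched in isolation. The snippet below is a minimal illustration, not the actual Doris code: `shared` stands in for the reused packet buffer, and `queued` for chunks waiting to be written to the network. Storing the shared reference lets the next packet clobber the previous one; copying the bytes out of the ByteBuffer first fixes it.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ByteBufferCopyDemo {
    public static void main(String[] args) {
        byte[] shared = new byte[4];            // reused for every "packet"
        List<byte[]> queued = new ArrayList<>();

        // Buggy pattern: every queued chunk is the same array, so the
        // second packet overwrites the first before it is sent.
        for (byte b : new byte[]{1, 2}) {
            Arrays.fill(shared, b);
            queued.add(shared);                 // BUG: stores the shared reference
        }
        System.out.println(queued.get(0)[0]);   // prints 2, not 1

        // Fixed pattern: copy the bytes out of the buffer before queueing.
        queued.clear();
        for (byte b : new byte[]{1, 2}) {
            Arrays.fill(shared, b);
            ByteBuffer buf = ByteBuffer.wrap(shared);
            byte[] copy = new byte[buf.remaining()];
            buf.get(copy);                      // copies into a fresh array
            queued.add(copy);
        }
        System.out.println(queued.get(0)[0]);   // prints 1 as expected
    }
}
```

The copy costs one allocation per packet, which is cheap compared to the silent data corruption the shared array caused.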
```diff
@@ -81,10 +81,10 @@ public class MysqlLoadManager {
                 InputStreamEntity entity = getInputStreamEntity(context, dataDesc.isClientLocal(), file);
                 HttpPut request = generateRequestForMySqlLoad(entity, dataDesc, database, table, token);
                 try (final CloseableHttpResponse response = httpclient.execute(request)) {
-                    JsonObject result = JsonParser.parseString(EntityUtils.toString(response.getEntity()))
-                            .getAsJsonObject();
+                    String body = EntityUtils.toString(response.getEntity());
+                    JsonObject result = JsonParser.parseString(body).getAsJsonObject();
                     if (!result.get("Status").getAsString().equalsIgnoreCase("Success")) {
-                        LOG.warn("Execute stream load for mysql data load failed with message: " + request);
+                        LOG.warn("Execute mysql data load failed with request: {} and response: {}", request, body);
                         throw new LoadException(result.get("Message").getAsString());
                     }
                     loadResult.incRecords(result.get("NumberLoadedRows").getAsLong());
```