[Fix](MySQLLoad) Fix a bug when loading a big local file: the ByteBuffer from the MySQL packet reuses the same byte array (#16901)
Loading a big local file could fail with an `[INTERNAL_ERROR]too many filtered rows` error, because the ByteBuffer from the MySQL client always reuses the same backing byte array: later packets overwrite the bytes of earlier ones, corrupting the byte order of data sent over the network. The fix copies the byte array before handing it to the network.
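A minimal standalone sketch of the aliasing bug (class and variable names here are illustrative, not from the Doris codebase): when an input stream is built directly on the ByteBuffer's backing array, the next packet that reuses that array silently rewrites the queued data, while a copy made with the patch's `bytesCopy` logic stays intact.

```java
import java.io.ByteArrayInputStream;
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Queue;

public class BufferCopyDemo {
    // Copy the readable region [position, limit) of the buffer,
    // mirroring the bytesCopy helper added by the patch.
    static byte[] bytesCopy(ByteBuffer buffer) {
        byte[] result = new byte[buffer.limit() - buffer.position()];
        System.arraycopy(buffer.array(), buffer.position(), result, 0, result.length);
        return result;
    }

    public static void main(String[] args) throws Exception {
        byte[] shared = new byte[] {'A', 'B', 'C', 'D'}; // simulates the reused packet array
        ByteBuffer buffer = ByteBuffer.wrap(shared);

        Queue<ByteArrayInputStream> queue = new ArrayDeque<>();
        // Buggy: the stream aliases `shared`, like the removed line in the diff.
        queue.offer(new ByteArrayInputStream(buffer.array(), buffer.position(), buffer.limit()));
        // Fixed: the stream owns an independent copy of the bytes.
        queue.offer(new ByteArrayInputStream(bytesCopy(buffer)));

        // "Next packet" arrives and reuses the same backing array.
        shared[0] = 'W'; shared[1] = 'X'; shared[2] = 'Y'; shared[3] = 'Z';

        String aliased = new String(queue.poll().readAllBytes()); // corrupted
        String copied  = new String(queue.poll().readAllBytes()); // preserved
        System.out.println(aliased + " " + copied); // prints "WXYZ ABCD"
    }
}
```

The aliased stream yields the overwritten bytes, while the copied stream still holds the original packet, which is exactly the behavior difference the patch relies on.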
```diff
@@ -42,10 +42,16 @@ public class ByteBufferNetworkInputStream extends InputStream {
         if (closed) {
             throw new IOException("Stream is already closed.");
         }
-        ByteArrayInputStream inputStream = new ByteArrayInputStream(buffer.array(), buffer.position(), buffer.limit());
+        ByteArrayInputStream inputStream = new ByteArrayInputStream(bytesCopy(buffer));
         queue.offer(inputStream, 300, TimeUnit.SECONDS);
     }
 
+    public byte[] bytesCopy(ByteBuffer buffer) {
+        byte[] result = new byte[buffer.limit() - buffer.position()];
+        System.arraycopy(buffer.array(), buffer.position(), result, 0, result.length);
+        return result;
+    }
+
     public void markFinished() {
         this.finished = true;
     }
```