# Proposed changes
[Parquet v1.11+ supports page skipping](https://github.com/apache/parquet-format/blob/master/PageIndex.md),
which helps the scanner reduce the amount of data scanned, decompressed, decoded, and inserted.
According to the performance FlameGraph, decompression accounts for about 20% of CPU time.
If a page can be filtered as a whole, it does not need to be decompressed at all.
However, row numbers are not aligned across the pages of different columns. Predicate columns can be filtered at page granularity,
but the other columns must skip rows within pages, so non-predicate columns only save decoding and insertion time.
An array column needs its repetition levels to stay aligned with the other columns, so it also only saves decoding and insertion time.
## Explore
The `OffsetIndex` in the column metadata locates each page's position.
Theoretically, a page could be skipped completely, including the time spent reading it from HDFS.
However, the average page size is around 500KB, and skipping a page requires calling `skip`.
`skip` performs poorly when called frequently,
and may not be better than reading large blocks of data (such as 4MB) sequentially.
If multiple consecutive pages are filtered, a `skip` read can be performed according to the `OffsetIndex`.
However, for simplicity and readability, the data of all pages is loaded and filtered in turn.
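For illustration only (not the actual scanner code), the sketch below uses hypothetical page metadata to show how the pages that survive filtering could be coalesced into read ranges, falling back to sequential reads when the gap of filtered pages is too small to make `skip` worthwhile:
```
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: decide which byte ranges to read for one column chunk.
public class PageSkipPlanner {
    // Minimal stand-in for the information an OffsetIndex provides per page.
    static class PageLocation {
        final long offset;          // byte offset of the page in the file
        final int compressedSize;   // compressed page size in bytes
        final boolean filtered;     // true if the page index filtered the whole page
        PageLocation(long offset, int compressedSize, boolean filtered) {
            this.offset = offset;
            this.compressedSize = compressedSize;
            this.filtered = filtered;
        }
    }

    static class ReadRange {
        final long start;
        final long end; // exclusive
        ReadRange(long start, long end) { this.start = start; this.end = end; }
        @Override public String toString() { return "[" + start + ", " + end + ")"; }
    }

    // Coalesce the pages that survive filtering into byte ranges; a skip is only
    // worthwhile when the gap of filtered pages is larger than minSkipBytes
    // (otherwise it is cheaper to keep reading sequentially).
    static List<ReadRange> plan(List<PageLocation> pages, long minSkipBytes) {
        List<ReadRange> ranges = new ArrayList<>();
        long start = -1;
        long end = -1;
        for (PageLocation page : pages) {
            if (page.filtered) {
                continue; // candidate for skipping
            }
            long pageStart = page.offset;
            long pageEnd = page.offset + page.compressedSize;
            if (start < 0) {
                start = pageStart;
                end = pageEnd;
            } else if (pageStart - end < minSkipBytes) {
                end = pageEnd; // gap too small, read through it
            } else {
                ranges.add(new ReadRange(start, end));
                start = pageStart;
                end = pageEnd;
            }
        }
        if (start >= 0) {
            ranges.add(new ReadRange(start, end));
        }
        return ranges;
    }
}
```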
Instead of adding a cast function on the literal, we directly change the literal type. This change saves cast execution time and memory.
For example:
In SQL: `CASE WHEN l_orderkey > 0 THEN ...`, where 0 is a TinyIntLiteral.
Before this PR:
`CASE WHEN l_orderkey > CAST(TinyIntLiteral(0) AS INT)`
With this PR:
`CASE WHEN l_orderkey > IntegerLiteral(0)`
Previously we used the output list to compare two LogicalProperties. Since join reorder changes the children order of a join plan, the output list changes as well, so the two join plans are no longer equal in the memo even though they should be. Therefore we had to add a project on the new join to keep the LogicalProperties the same.
This PR changes the equals and hashCode functions of LogicalProperties to compare two LogicalProperties using a set of outputs. Then we no longer need to add the top project. This helps keep the memo simple and efficient.
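A minimal sketch of the change, with simplified types (the real LogicalProperties carries more state): equality and hashing are based on the set of output slots, so reordering children no longer breaks equality in the memo.
```
import java.util.HashSet;
import java.util.List;

// Simplified stand-ins for illustration only.
class Slot {
    final String name;
    Slot(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof Slot && ((Slot) o).name.equals(name);
    }
    @Override public int hashCode() { return name.hashCode(); }
}

class LogicalProperties {
    private final List<Slot> output;

    LogicalProperties(List<Slot> output) {
        this.output = output;
    }

    // Compare outputs as a set, so two join plans whose children were reordered
    // (same slots, different order) stay equal in the memo.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof LogicalProperties)) return false;
        LogicalProperties other = (LogicalProperties) o;
        return new HashSet<>(output).equals(new HashSet<>(other.output));
    }

    @Override
    public int hashCode() {
        return new HashSet<>(output).hashCode();
    }
}
```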
Add some utils and provide the candidate row ranges (generated from the skipped row ranges of each column)
to read for the page index filter.
This version supports binary operator filters (a sketch follows the TODO list below).
TODO:
- use context instead of structures in close()
- process complex type filter
- use this instead of row group minmax filter
- refactor _eval_binary() for row group filter and page index filter
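As a rough illustration with hypothetical structures (the actual implementation is in the BE), candidate row ranges can be derived as the complement of the rows skipped by any predicate column:
```
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of turning per-column skipped row ranges into
// candidate row ranges for a row group.
public class CandidateRowRanges {
    static class RowRange {
        final long first;
        final long last; // inclusive
        RowRange(long first, long last) { this.first = first; this.last = last; }
        @Override public String toString() { return "[" + first + ", " + last + "]"; }
    }

    // Rows skipped by ANY predicate column cannot match a conjunctive filter,
    // so mark them all and return the remaining rows as candidate ranges.
    static List<RowRange> candidates(long rowGroupRowCount, List<RowRange> skippedByAnyColumn) {
        boolean[] skipped = new boolean[(int) rowGroupRowCount]; // simplified: fits in an int
        for (RowRange range : skippedByAnyColumn) {
            for (long row = range.first; row <= range.last; row++) {
                skipped[(int) row] = true;
            }
        }
        List<RowRange> result = new ArrayList<>();
        long start = -1;
        for (long row = 0; row < rowGroupRowCount; row++) {
            if (!skipped[(int) row]) {
                if (start < 0) {
                    start = row;
                }
            } else if (start >= 0) {
                result.add(new RowRange(start, row - 1));
                start = -1;
            }
        }
        if (start >= 0) {
            result.add(new RowRange(start, rowGroupRowCount - 1));
        }
        return result;
    }
}
```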
A template for building internal query SQL statements, mainly used by the statistics module. After the template is defined, an executable statement is built from the given parameters.
For example, template and parameters:
- template: `SELECT ${col} FROM ${table} WHERE id = ${id};`,
- parameters: `{col=colName, table=tableName, id=1}`
- result sql: `SELECT colName FROM tableName WHERE id = 1;`
usage:
```
String template = "SELECT * FROM ${table} WHERE id = ${id};";
Map<String, String> params = new HashMap<>();
params.put("table", "table0");
params.put("id", "123");
// result: SELECT * FROM table0 WHERE id = 123;
String result = InternalSqlTemplate.processTemplate(template, params);
```
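The substitution itself can be as simple as a regex replacement over `${...}` placeholders. The following is only an illustrative sketch, not necessarily how `InternalSqlTemplate.processTemplate` is implemented:
```
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SimpleSqlTemplate {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{(\\w+)}");

    // Replace every ${name} in the template with params.get("name").
    public static String process(String template, Map<String, String> params) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (matcher.find()) {
            String key = matcher.group(1);
            String value = params.get(key);
            if (value == null) {
                throw new IllegalArgumentException("missing parameter: " + key);
            }
            matcher.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        matcher.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("table", "table0");
        params.put("id", "123");
        // Prints: SELECT * FROM table0 WHERE id = 123;
        System.out.println(process("SELECT * FROM ${table} WHERE id = ${id};", params));
    }
}
```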
* [docs](function) add a series of date function documents
add docs for `hours_add`, `hours_sub`, `minutes_add`, `minutes_sub`,
`seconds_add`, `seconds_sub`, `years_sub`, `years_add`, `months_add`,
`months_sub`, `days_add`, `days_sub`, `weeks_add`, `weeks_sub` functions.
There is a problem with StatisticsTaskScheduler: peek() returns a reference to the same task object every time, while the for-loop executes remove() multiple times.
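A hedged sketch of the fixed pattern with hypothetical task and queue types (not the actual StatisticsTaskScheduler code): each loop iteration should take its own task from the queue, e.g. with `poll()`, instead of peeking once and removing repeatedly.
```
import java.util.LinkedList;
import java.util.Queue;

// Hypothetical illustration of the scheduling loop.
class Task {
    final long id;
    Task(long id) { this.id = id; }
}

class SchedulerLoopSketch {
    // Buggy shape: peek() keeps returning the same head task while remove() is
    // called once per iteration, so later iterations operate on the wrong task.
    // Fixed shape: poll() both returns and removes the head each iteration.
    static void schedule(Queue<Task> queue, int batchSize) {
        for (int i = 0; i < batchSize; i++) {
            Task task = queue.poll();
            if (task == null) {
                break; // queue drained
            }
            submit(task);
        }
    }

    static void submit(Task task) {
        System.out.println("submitting task " + task.id);
    }

    public static void main(String[] args) {
        Queue<Task> queue = new LinkedList<>();
        queue.add(new Task(1));
        queue.add(new Task(2));
        schedule(queue, 4);
    }
}
```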
Execute SQL query statements internally (in FE). Internal query is mainly used by the statistics module: FE obtains statistics, such as a column's maximum and minimum values, from BE via SQL.
This is a tool module for statistics; it does not affect the original code, nor does it affect users.
The simple usage is as follows (the following code does no exception handling):
```
String dbName = "test";
String sql = "SELECT * FROM table0";
InternalQuery query = new InternalQuery(dbName, sql);
InternalQueryResult result = query.query();
List<ResultRow> resultRows = result.getResultRows();
for (ResultRow resultRow : resultRows) {
    List<String> columns = resultRow.getColumns();
    for (int i = 0; i < columns.size(); i++) {
        resultRow.getColumnIndex(columns.get(i));
        resultRow.getColumnName(i);
        resultRow.getColumnType(columns.get(i));
        resultRow.getColumnType(i);
        resultRow.getColumnValue(columns.get(i));
        resultRow.getColumnValue(i);
    }
}
```
Refactor the scanners for the hms external catalog; work in progress.
Use VFileScanner; NewFileParquetScanner, NewFileOrcScanner and NewFileTextScanner will be removed after it is fully tested.
Queries over parquet files have been tested; readers for orc and text files, as well as the load logic, still need to be added.
1. Add all slots used by onClause in project
```
(A & B) & C, e.g.
join(hash conjuncts: C.t2 = A.t2)
|---project(A.t2)
|   +---join(hash conjuncts: A.t1 = B.t1)
|       +---A
|       +---B
+---C

transform to (A & C) & B

join(hash conjuncts: A.t1 = B.t1)
|---project(A.t2)
|   +---join(hash conjuncts: C.t2 = A.t2)
|       +---A
|       +---C
+---B
```
But the projection only includes `A.t2`, so `A.t1` cannot be found; we should add the slots used by the onClause when a projection exists (see the sketch after this list).
2. fix join reorder mark
Add the `LAsscom` mark when applying `LAsscom`.
3. remove slotReference
Use `Slot` instead of `SlotReference` to avoid casts.
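A rough sketch of the fix for item 1, using hypothetical types (the real rule works on Nereids plan nodes): the slots referenced by the onClause are appended to the child projection's output when they are missing.
```
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch; not the actual Nereids rule.
class Slot {
    final String name;
    Slot(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof Slot && ((Slot) o).name.equals(name);
    }
    @Override public int hashCode() { return name.hashCode(); }
    @Override public String toString() { return name; }
}

class ProjectionFixSketch {
    // When the reordered join keeps a child projection, the slots referenced by the
    // new onClause (e.g. A.t1) must be appended to that projection's output,
    // otherwise the join condition cannot be resolved.
    static List<Slot> addOnClauseSlots(List<Slot> projections, Set<Slot> onClauseSlots) {
        List<Slot> result = new ArrayList<>(projections);
        Set<Slot> existing = new HashSet<>(projections);
        for (Slot slot : onClauseSlots) {
            if (!existing.contains(slot)) {
                result.add(slot);
            }
        }
        return result;
    }
}
```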
The predicate column type for char, varchar and string is PredicateColumnType<TYPE_STRING>, so the _base_evaluate method should always convert the input column to PredicateColumnType<TYPE_STRING>.
This PR fixes the following case:
- 2 Backends.
- Create tables in a colocation group with 1 replica.
- Decommission one of the Backends.
- The tablets on the decommissioned Backend are not reduced.
This is a bug in ColocateTableCheckerAndBalancer.
Every time a new broker load comes in, Doris updates the start time of Kerberos authentication,
but this logic is wrong,
because the Kerberos authentication duration is counted from the moment the ticket is obtained.
This PR changes the logic:
1. If Kerberos is used, check fs expiration by create time.
2. Otherwise, check fs expiration by access time.
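A minimal sketch of the changed check, using hypothetical field names (not the actual broker load code): Kerberos expiration is measured from the create time, while other filesystems expire by access time.
```
// Hypothetical illustration of the expiration check; field and method names are made up.
class FileSystemCacheEntry {
    final boolean kerberosEnabled;
    final long createTimeMs;   // when the filesystem (and Kerberos ticket) was created
    volatile long lastAccessTimeMs;

    FileSystemCacheEntry(boolean kerberosEnabled, long createTimeMs) {
        this.kerberosEnabled = kerberosEnabled;
        this.createTimeMs = createTimeMs;
        this.lastAccessTimeMs = createTimeMs;
    }

    // Kerberos tickets expire relative to when they were obtained, so the check must
    // use the create time; non-Kerberos filesystems can expire by idle (access) time.
    boolean isExpired(long nowMs, long expirationMs) {
        long base = kerberosEnabled ? createTimeMs : lastAccessTimeMs;
        return nowMs - base > expirationMs;
    }
}
```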