Support GROUPING SETS, ROLLUP and CUBE to extend the GROUP BY statement
Support the GROUPING SETS syntax:
```
SELECT a, b, SUM( c ) FROM tab1 GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
```
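For reference, the statement above is semantically equivalent to a UNION ALL of the individual GROUP BY queries (a sketch of the semantics; columns absent from a grouping set are filled with NULL):
```
SELECT a,    b,    SUM( c ) FROM tab1 GROUP BY a, b
UNION ALL
SELECT a,    NULL, SUM( c ) FROM tab1 GROUP BY a
UNION ALL
SELECT NULL, b,    SUM( c ) FROM tab1 GROUP BY b
UNION ALL
SELECT NULL, NULL, SUM( c ) FROM tab1;
```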
and CUBE or ROLLUP, like:
```
SELECT a, b, c, SUM( d ) FROM tab1 GROUP BY ROLLUP|CUBE (a, b, c);
```
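As a sketch of the semantics, ROLLUP produces the hierarchical grouping sets while CUBE produces all combinations:
```
-- ROLLUP(a, b, c) is equivalent to:
SELECT a, b, c, SUM( d ) FROM tab1
GROUP BY GROUPING SETS ( (a, b, c), (a, b), (a), ( ) );
-- CUBE(a, b, c) is equivalent to:
SELECT a, b, c, SUM( d ) FROM tab1
GROUP BY GROUPING SETS ( (a, b, c), (a, b), (a, c), (b, c), (a), (b), (c), ( ) );
```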
[ADD] Support grouping functions in expressions, e.g. grouping(a) + grouping(b) (#2039); a sketch follows below.
[FIX] Fix an analyzer error in window functions (#2039)
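A minimal sketch of the grouping-function expressions added above; grouping(x) returns 1 when column x is rolled up (absent from the current grouping set) and 0 otherwise, so the sum gives the aggregation level of each row:
```
SELECT a, b, grouping(a) + grouping(b) AS agg_level, SUM( c )
FROM tab1
GROUP BY GROUPING SETS ( (a, b), (a), (b), ( ) );
```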
Standardize the results returned by INSERT operations, making them easier for users to consume and making problems easier to locate. More details can be found in insert-into-manual.md.
Random distribution is no longer supported since version 0.9, so a way is needed to convert an existing random distribution to a hash distribution:
```
ALTER TABLE db.tbl SET ("distribution_type" = "hash");
```
1. Support specifying a label in the INSERT INTO statement; a complete illustration follows the list.
```
INSERT INTO tbl1 WITH LABEL label1 ...;
```
2. Return the state of the job corresponding to an existing label in the stream load result:
```
...
"Status": "Label Already Exists",
"ExistingJobStatus": "FINISHED"
...
```
3. Return the most recent 2000 transactions in SHOW PROC '/transactions'.
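A complete form of item 1, for illustration only (tbl1, tbl2 and label1 are placeholder names):
```
INSERT INTO tbl1 WITH LABEL label1 SELECT * FROM tbl2;
```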
Currently, we do not support parsing columns encoded in the file path, e.g. extracting column k1 from the file path /path/to/dir/k1=1/xxx.csv.
This patch is able to parse columns from the file path, as Spark does with Partition Discovery.
It parses partition columns in BrokerScanNode.java and saves the parsing result for each file path as a property of TBrokerRangeDesc, so that the broker reader on the BE side can read the value of the specified partition column.
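A hedged sketch of what such a load could look like, assuming a COLUMNS FROM PATH AS clause for declaring path-derived columns (the label, path, host and broker name are illustrative):
```
LOAD LABEL example_db.label_path_demo
(
    DATA INFILE("hdfs://host:port/path/to/dir/k1=1/*.csv")
    INTO TABLE tab1
    COLUMNS TERMINATED BY ","
    (k2, k3)
    -- k1 is not in the file body; it is parsed from the k1=1 path segment
    COLUMNS FROM PATH AS (k1)
)
WITH BROKER "my_broker";
```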
* Broker load supports column functions
This commit supports column functions in broker load; the grammar of LoadStmt has not been changed.
Example:
```
columns terminated by ',' (tmp_c1, tmp_c2) set (c1=tmp_c1+tmp_c2)
```
Old functions such as default_value and strftime remain compatible.
After this commit, there is no difference in column functions between stream load and broker load, except for the old functions.
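For illustration, the clause above embedded in a full broker load statement (the label, path, host and broker name are placeholders):
```
LOAD LABEL example_db.label_set_demo
(
    DATA INFILE("hdfs://host:port/path/to/file.csv")
    INTO TABLE tab1
    COLUMNS TERMINATED BY ","
    (tmp_c1, tmp_c2)
    -- compute c1 from the two temporary source columns
    SET (c1 = tmp_c1 + tmp_c2)
)
WITH BROKER "my_broker";
```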