[refactor](Nereids) refactor cte analyze, rewrite and reuse code (#21727)

REFACTOR:

1. Generate CTEAnchor, CTEProducer, and CTEConsumer during analysis.

For example, take the statement `WITH cte1 AS (SELECT * FROM t) SELECT * FROM cte1`.
Before this PR, we got an analyzed plan like this:
```
logicalCTE(LogicalSubQueryAlias(cte1))
+-- logicalProject()
    +-- logicalCteConsumer()
```
We only had LogicalCteConsumer in the plan, but no LogicalCteProducer.
This is not a valid plan and should not be the final result of analysis.
After this PR, we get an analyzed plan like this:
```
logicalCteAnchor()
|-- logicalCteProducer()
+-- logicalProject()
    +-- logicalCteConsumer()
```
This is a valid plan, with both LogicalCteProducer and LogicalCteConsumer.

2. Replace re-analyzing the unbound plan with deep-copying the plan when doing CTEInline

Because we generate LogicalCteAnchor and LogicalCteProducer during analysis, we can
no longer re-analyze to generate the CTE inline plan. Another reason is that we reuse
relation ids between unbound and bound relations, so re-analyzing an unresolved CTE
plan would produce two relations with the same RelationId. This is wrong, because we
use RelationId to distinguish two different relations.
This PR implements two helper classes, `LogicalPlanDeepCopier` and `ExpressionDeepCopier`,
to deep copy a new plan from the CTEProducer.
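As a rough illustration of the deep-copy idea (a self-contained toy, not the actual
`LogicalPlanDeepCopier`/`ExpressionDeepCopier` API): every copy of the producer subtree
gets fresh ids, and an old-to-new map keeps references consistent inside one copy,
which avoids the duplicate-RelationId problem that re-analysis would cause.
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch of the deep-copy idea: fresh ids per copy, consistent within a copy.
final class DeepCopySketch {
    record SlotRef(int exprId, String name) {}

    private static final AtomicInteger NEXT_EXPR_ID = new AtomicInteger(100);

    // Allocate a new id the first time an old slot is seen, then reuse it,
    // so two references to the same producer slot stay equal inside the copy.
    static SlotRef copySlot(SlotRef old, Map<Integer, Integer> oldToNew) {
        int newId = oldToNew.computeIfAbsent(old.exprId(), k -> NEXT_EXPR_ID.getAndIncrement());
        return new SlotRef(newId, old.name());
    }

    public static void main(String[] args) {
        Map<Integer, Integer> oldToNew = new HashMap<>();
        SlotRef producerSlot = new SlotRef(1, "c1");
        SlotRef copy1 = copySlot(producerSlot, oldToNew);
        SlotRef copy2 = copySlot(producerSlot, oldToNew);
        System.out.println(copy1.equals(copy2));                     // true: consistent inside one copy
        System.out.println(copy1.exprId() != producerSlot.exprId()); // true: distinct from the original
    }
}
```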

3. New rewrite framework to ensure CTEInline is done the right way.

Before this PR, we did CTEInline before applying any rewrite rule. But some
CteConsumers can be eliminated by rewriting. After this PR, we do CTEInline after
the plans relying on the CTEProducer have been rewritten, so we can still inline a
CTE once the number of its CTEConsumers drops below the CTEInline threshold.
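A minimal sketch of the decision (toy code; the threshold wiring is an assumption,
not the real CTEInline rule):
```java
import java.util.Map;
import java.util.Set;

// Toy sketch: decide inlining only after the consumer-side rewrites have run.
final class CteInlineSketch {
    // cteIdToConsumers holds only the consumers that survived rewriting, so a
    // CTE whose consumers were pruned can now fall below the inline threshold.
    static boolean shouldInline(Map<Integer, Set<Object>> cteIdToConsumers,
                                int cteId, int inlineThreshold) {
        int aliveConsumers = cteIdToConsumers.getOrDefault(cteId, Set.of()).size();
        return aliveConsumers <= inlineThreshold;
    }
}
```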

4. add a relation id to all relation plan nodes
5. let all relations generated from tables implement the trait CatalogRelation
6. reuse the relation id between an unbound relation and the relation after binding (see the sketch below)
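For item 6, the binding side looks roughly like this fragment. `StatementContext.getNextRelationId()`
and the `UnboundRelation(RelationId, nameParts)` constructor appear in this commit's diff; the
bound-scan constructor shape is only an assumption for illustration.
```java
// Unbound relation created while parsing / collecting tables:
UnboundRelation unbound = new UnboundRelation(
        statementContext.getNextRelationId(), ImmutableList.of("db1", "t1"));

// When BindRelation resolves the table, the bound relation keeps the SAME id
// instead of allocating a new one (constructor shape assumed for illustration):
LogicalOlapScan bound = new LogicalOlapScan(unbound.getRelationId(), olapTable);
```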


ENHANCEMENT:

1. Pull up CTEAnchor before RBO to avoid breaking other rules' patterns

Before this PR, we generated CTEAnchor and LogicalCTE in the middle of the plan,
so every rule had to handle LogicalCTEAnchor, otherwise it would generate an unexpected plan.
For example, push-down-filter and push-down-project had to add patterns like:
```
logicalProject(logicalCTE)
...
logicalFilter(logicalCteAnchor)
...
```
Projects and filters must be pushed through these virtual plan nodes to ensure that all projects
and filters can be merged together and end up in the right order. For example:
```
logicalProject
+-- logicalFilter
    +-- logicalCteAnchor
        +-- logicalProject
            +-- logicalFilter
                +-- logicalOlapScan
```
The plan above leads to a translation error, because we cannot apply filter and
project twice on the bottom logicalOlapScan.
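After this PR, PullUpCteAnchor hoists the anchor above such operators before RBO, so the
plan roughly becomes the shape below and the adjacent projects and filters can be merged
by the normal rules (illustrative shape only):
```
logicalCteAnchor()
|-- logicalCteProducer()
+-- logicalProject
    +-- logicalFilter
        +-- logicalProject
            +-- logicalFilter
                +-- logicalOlapScan
```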


BUGFIX:

1. Recursively analyze LogicalCTE to avoid binding an outer relation to an inner CTE

For example:
```sql
SELECT * FROM (WITH cte1 AS (SELECT * FROM t1) SELECT * FROM cte1)v1, cte1 v2; 
```
Before this PR, we would use the nested CTE name to bind the outer plan, so the outer
cte1 with alias v2 was bound to the inner cte1. After this PR, this SQL throws a
"Table not exists" exception during binding.
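A hedged sketch of the recursive analysis: the factory `CascadesContext.newContextWithCteContext`,
the `CTEContext(cteId, parsedPlan, previousCteContext)` constructor, and `getRewritePlan()` appear
in this commit's diff, while the loop body, `analyzeSubTree`, and `nextCteId` are illustrative assumptions.
```java
// Each CTE body is analyzed in a child context whose CTEContext only contains
// the CTEs that are in scope at that point, so an inner cte1 can never be
// captured by an outer reference.
CTEContext scope = cascadesContext.getCteContext();
for (LogicalSubQueryAlias<Plan> aliasQuery : logicalCTE.getAliasQueries()) {
    CascadesContext innerCtx = CascadesContext.newContextWithCteContext(
            cascadesContext, aliasQuery, scope);
    analyzeSubTree(innerCtx);                                  // assumed helper
    LogicalSubQueryAlias<Plan> analyzed =
            (LogicalSubQueryAlias<Plan>) innerCtx.getRewritePlan();
    // register the analyzed CTE so later CTEs and the outer query can see it
    scope = new CTEContext(nextCteId(), analyzed, scope);      // assumed id supplier
}
// binding `cte1 v2` in the outer query now fails with "Table not exists",
// because the inner cte1 was never registered in the outer scope
```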

2. Use the right way to do withChildren in CTEProducer and remove the projects attribute from it

Before this PR, we added an attribute named projects to CTEProducer to represent its output,
because we could not get the right output by calling its `getOutput` method. The root cause
was the wrong implementation of computeOutput in LogicalCteProducer.
This PR fixes that problem and removes the projects attribute from CTEProducer.
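A hedged sketch of the direction of the fix (the method name computeOutput comes from the
description above; the return type and body are assumptions, not the verbatim code):
```java
// With a correct computeOutput, the producer simply exposes its child's
// output, so the extra `projects` attribute becomes unnecessary.
@Override
public List<Slot> computeOutput() {
    return child().getOutput();
}
```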

3. Adjust nullable rule updates CTEConsumer's output according to CTEProducer's output

This PR processes nullability on LogicalCteConsumer to ensure the CteConsumer's output carries
the right nullable info when the CteProducer's output nullability has been adjusted.
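An illustrative fragment of what the adjustment does (the slot map accessor appears in this
commit's diff; `withNullable`/`nullable` are assumed accessor names):
```java
// Rebuild the consumer's output so each consumer slot copies the (possibly
// adjusted) nullability of the producer slot it maps to.
Map<Slot, Slot> producerToConsumer = cteConsumer.getProducerToConsumerSlotMap();
List<Slot> adjustedOutput = cteProducer.getOutput().stream()
        .map(producerSlot -> {
            Slot consumerSlot = producerToConsumer.get(producerSlot);
            return consumerSlot.withNullable(producerSlot.nullable());
        })
        .collect(Collectors.toList());
```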

4. Binding a set operation expression should not change the children's output nullability

This PR fixes a problem introduced by the previous PR #21168: the nullable info of a
SetOperation's children should not change after binding the SetOperation.
Author: morrySnow
Date: 2023-07-19 11:41:41 +08:00
Commit: d987f782d2 (parent: c28b90a301)
182 changed files with 3136 additions and 2704 deletions


@ -22,7 +22,7 @@ import org.apache.doris.tablefunction.TableValuedFunctionIf;
import java.util.List;
public class FunctionGenTable extends Table {
private TableValuedFunctionIf tvf;
private final TableValuedFunctionIf tvf;
public FunctionGenTable(long id, String tableName, TableType type, List<Column> fullSchema,
TableValuedFunctionIf tvf) {


@ -43,14 +43,14 @@ public class CTEContext {
/* build head CTEContext */
public CTEContext() {
this(null, null, CTEId.DEFAULT);
this(CTEId.DEFAULT, null, null);
}
/**
* CTEContext
*/
public CTEContext(@Nullable LogicalSubQueryAlias<Plan> parsedPlan,
@Nullable CTEContext previousCteContext, CTEId cteId) {
public CTEContext(CTEId cteId, @Nullable LogicalSubQueryAlias<Plan> parsedPlan,
@Nullable CTEContext previousCteContext) {
if ((parsedPlan == null && previousCteContext != null) || (parsedPlan != null && previousCteContext == null)) {
throw new AnalysisException("Only first CteContext can contains null cte plan or previousCteContext");
}
@ -78,7 +78,7 @@ public class CTEContext {
/**
* Get for CTE reuse.
*/
public Optional<LogicalPlan> getReuse(String cteName) {
public Optional<LogicalPlan> getAnalyzedCTEPlan(String cteName) {
if (!findCTEContext(cteName).isPresent()) {
return Optional.empty();
}


@ -26,10 +26,8 @@ import org.apache.doris.nereids.analyzer.UnboundRelation;
import org.apache.doris.nereids.jobs.Job;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.jobs.executor.Analyzer;
import org.apache.doris.nereids.jobs.rewrite.CustomRewriteJob;
import org.apache.doris.nereids.jobs.rewrite.RewriteBottomUpJob;
import org.apache.doris.nereids.jobs.rewrite.RewriteTopDownJob;
import org.apache.doris.nereids.jobs.rewrite.RootPlanTreeRewriteJob.RootRewriteJobContext;
import org.apache.doris.nereids.jobs.scheduler.JobPool;
import org.apache.doris.nereids.jobs.scheduler.JobScheduler;
import org.apache.doris.nereids.jobs.scheduler.JobStack;
@ -39,22 +37,21 @@ import org.apache.doris.nereids.memo.Group;
import org.apache.doris.nereids.memo.Memo;
import org.apache.doris.nereids.processor.post.RuntimeFilterContext;
import org.apache.doris.nereids.properties.PhysicalProperties;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleFactory;
import org.apache.doris.nereids.rules.RuleSet;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.rules.analysis.BindRelation.CustomTableResolver;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SubqueryExpr;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalSubQueryAlias;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.qe.ConnectContext;
import org.apache.doris.qe.SessionVariable;
import org.apache.doris.statistics.ColumnStatistic;
@ -69,10 +66,10 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.Stack;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import javax.annotation.Nullable;
@ -84,12 +81,11 @@ public class CascadesContext implements ScheduleContext {
// in analyze/rewrite stage, the plan will storage in this field
private Plan plan;
private Optional<RootRewriteJobContext> currentRootRewriteJobContext;
// in optimize stage, the plan will storage in the memo
private Memo memo;
private final StatementContext statementContext;
private CTEContext cteContext;
private final CTEContext cteContext;
private final RuleSet ruleSet;
private final JobPool jobPool;
private final JobScheduler jobScheduler;
@ -103,19 +99,9 @@ public class CascadesContext implements ScheduleContext {
private boolean isRewriteRoot;
private volatile boolean isTimeout = false;
private Map<CTEId, Set<LogicalCTEConsumer>> cteIdToConsumers = new HashMap<>();
private Map<CTEId, Callable<LogicalPlan>> cteIdToCTEClosure = new HashMap<>();
private Map<CTEId, Set<Expression>> cteIdToProjects = new HashMap<>();
private Map<Integer, Set<Expression>> consumerIdToFilters = new HashMap<>();
private Map<CTEId, Set<Integer>> cteIdToConsumerUnderProjects = new HashMap<>();
// Used to update consumer's stats
private Map<CTEId, List<Pair<Map<Slot, Slot>, Group>>> cteIdToConsumerGroup = new HashMap<>();
public CascadesContext(Plan plan, Memo memo, StatementContext statementContext,
PhysicalProperties requestProperties) {
this(plan, memo, statementContext, new CTEContext(), requestProperties);
}
// current process subtree, represent outer plan if empty
private final Optional<CTEId> currentTree;
private final Optional<CascadesContext> parent;
/**
* Constructor of OptimizerContext.
@ -123,55 +109,76 @@ public class CascadesContext implements ScheduleContext {
* @param memo {@link Memo} reference
* @param statementContext {@link StatementContext} reference
*/
public CascadesContext(Plan plan, Memo memo, StatementContext statementContext,
private CascadesContext(Optional<CascadesContext> parent, Optional<CTEId> currentTree,
StatementContext statementContext, Plan plan, Memo memo,
CTEContext cteContext, PhysicalProperties requireProperties) {
this.plan = plan;
this.parent = Objects.requireNonNull(parent, "parent should not null");
this.currentTree = Objects.requireNonNull(currentTree, "currentTree should not null");
this.statementContext = Objects.requireNonNull(statementContext, "statementContext should not null");
this.plan = Objects.requireNonNull(plan, "plan should not null");
this.memo = memo;
this.statementContext = statementContext;
this.cteContext = Objects.requireNonNull(cteContext, "cteContext should not null");
this.ruleSet = new RuleSet();
this.jobPool = new JobStack();
this.jobScheduler = new SimpleJobScheduler();
this.currentJobContext = new JobContext(this, requireProperties, Double.MAX_VALUE);
this.subqueryExprIsAnalyzed = new HashMap<>();
this.runtimeFilterContext = new RuntimeFilterContext(getConnectContext().getSessionVariable());
this.cteContext = cteContext;
}
public static CascadesContext newRewriteContext(StatementContext statementContext,
Plan initPlan, PhysicalProperties requireProperties) {
return new CascadesContext(initPlan, null, statementContext, requireProperties);
}
public static CascadesContext newRewriteContext(StatementContext statementContext,
Plan initPlan, CTEContext cteContext) {
return newRewriteContext(statementContext, initPlan, cteContext, PhysicalProperties.ANY);
}
public static CascadesContext newRewriteContext(StatementContext statementContext,
Plan initPlan, CTEContext cteContext, PhysicalProperties requireProperties) {
return new CascadesContext(initPlan, null, statementContext, cteContext, requireProperties);
}
/**
* New rewrite context.
* init a brand-new context to process whole tree
*/
public static CascadesContext newRewriteContext(CascadesContext context, Plan plan) {
return newRewriteContext(context, plan, PhysicalProperties.ANY);
public static CascadesContext initContext(StatementContext statementContext,
Plan initPlan, PhysicalProperties requireProperties) {
return newContext(Optional.empty(), Optional.empty(), statementContext,
initPlan, new CTEContext(), requireProperties);
}
/**
* use for analyze cte. we must pass CteContext from outer since we need to get right scope of cte
*/
public static CascadesContext newContextWithCteContext(CascadesContext cascadesContext,
Plan initPlan, CTEContext cteContext) {
return newContext(Optional.of(cascadesContext), Optional.empty(),
cascadesContext.getStatementContext(), initPlan, cteContext, PhysicalProperties.ANY);
}
public static CascadesContext newCurrentTreeContext(CascadesContext context) {
return CascadesContext.newContext(context.getParent(), context.getCurrentTree(), context.getStatementContext(),
context.getRewritePlan(), context.getCteContext(),
context.getCurrentJobContext().getRequiredProperties());
}
/**
* New rewrite context copy from current context, used in cbo rewriter.
*/
public static CascadesContext newRewriteContext(CascadesContext context,
public static CascadesContext newSubtreeContext(Optional<CTEId> subtree, CascadesContext context,
Plan plan, PhysicalProperties requireProperties) {
CascadesContext cascadesContext = CascadesContext.newRewriteContext(
context.getStatementContext(), plan, context.getCteContext(), requireProperties);
cascadesContext.cteIdToConsumers = context.cteIdToConsumers;
cascadesContext.cteIdToProjects = context.cteIdToProjects;
cascadesContext.cteContext = context.cteContext;
cascadesContext.cteIdToCTEClosure = context.cteIdToCTEClosure;
cascadesContext.consumerIdToFilters = context.consumerIdToFilters;
return cascadesContext;
return CascadesContext.newContext(Optional.of(context), subtree, context.getStatementContext(),
plan, context.getCteContext(), requireProperties);
}
private static CascadesContext newContext(Optional<CascadesContext> parent, Optional<CTEId> subtree,
StatementContext statementContext, Plan initPlan,
CTEContext cteContext, PhysicalProperties requireProperties) {
return new CascadesContext(parent, subtree, statementContext, initPlan, null, cteContext, requireProperties);
}
public CascadesContext getRoot() {
CascadesContext root = this;
while (root.getParent().isPresent()) {
root = root.getParent().get();
}
return root;
}
public Optional<CascadesContext> getParent() {
return parent;
}
public Optional<CTEId> getCurrentTree() {
return currentTree;
}
public synchronized void setIsTimeout(boolean isTimeout) {
@ -194,10 +201,6 @@ public class CascadesContext implements ScheduleContext {
return new Analyzer(this, customTableResolver);
}
public Analyzer newCustomAnalyzer(Optional<CustomTableResolver> customTableResolver) {
return new Analyzer(this, customTableResolver);
}
@Override
public void pushJob(Job job) {
jobPool.push(job);
@ -257,15 +260,6 @@ public class CascadesContext implements ScheduleContext {
this.plan = plan;
}
public Optional<RootRewriteJobContext> getCurrentRootRewriteJobContext() {
return currentRootRewriteJobContext;
}
public void setCurrentRootRewriteJobContext(
RootRewriteJobContext currentRootRewriteJobContext) {
this.currentRootRewriteJobContext = Optional.ofNullable(currentRootRewriteJobContext);
}
public void setSubqueryExprIsAnalyzed(SubqueryExpr subqueryExpr, boolean isAnalyzed) {
subqueryExprIsAnalyzed.put(subqueryExpr, isAnalyzed);
}
@ -282,41 +276,14 @@ public class CascadesContext implements ScheduleContext {
return execute(new RewriteBottomUpJob(memo.getRoot(), currentJobContext, ImmutableList.copyOf(rules)));
}
public CascadesContext bottomUpRewrite(Rule... rules) {
return bottomUpRewrite(ImmutableList.copyOf(rules));
}
public CascadesContext bottomUpRewrite(List<Rule> rules) {
return execute(new RewriteBottomUpJob(memo.getRoot(), rules, currentJobContext));
}
public CascadesContext topDownRewrite(RuleFactory... rules) {
return execute(new RewriteTopDownJob(memo.getRoot(), currentJobContext, ImmutableList.copyOf(rules)));
}
public CascadesContext topDownRewrite(Rule... rules) {
return topDownRewrite(ImmutableList.copyOf(rules));
}
public CascadesContext topDownRewrite(List<Rule> rules) {
return execute(new RewriteTopDownJob(memo.getRoot(), rules, currentJobContext));
}
public CascadesContext topDownRewrite(CustomRewriter customRewriter) {
CustomRewriteJob customRewriteJob = new CustomRewriteJob(() -> customRewriter, RuleType.TEST_REWRITE);
customRewriteJob.execute(currentJobContext);
toMemo();
return this;
}
public CTEContext getCteContext() {
return cteContext;
}
public void setCteContext(CTEContext cteContext) {
this.cteContext = cteContext;
}
public void setIsRewriteRoot(boolean isRewriteRoot) {
this.isRewriteRoot = isRewriteRoot;
}
@ -347,9 +314,8 @@ public class CascadesContext implements ScheduleContext {
if (statementContext == null) {
return defaultValue;
}
T cacheResult = statementContext.getOrRegisterCache(cacheName,
return statementContext.getOrRegisterCache(cacheName,
() -> variableSupplier.apply(connectContext.getSessionVariable()));
return cacheResult;
}
private CascadesContext execute(Job job) {
@ -392,9 +358,9 @@ public class CascadesContext implements ScheduleContext {
Set<UnboundRelation> unboundRelations = new HashSet<>();
logicalPlan.foreach(p -> {
if (p instanceof LogicalFilter) {
unboundRelations.addAll(extractUnboundRelationFromFilter((LogicalFilter) p));
unboundRelations.addAll(extractUnboundRelationFromFilter((LogicalFilter<?>) p));
} else if (p instanceof LogicalCTE) {
unboundRelations.addAll(extractUnboundRelationFromCTE((LogicalCTE) p));
unboundRelations.addAll(extractUnboundRelationFromCTE((LogicalCTE<?>) p));
} else {
unboundRelations.addAll(p.collect(UnboundRelation.class::isInstance));
}
@ -402,7 +368,7 @@ public class CascadesContext implements ScheduleContext {
return unboundRelations;
}
private Set<UnboundRelation> extractUnboundRelationFromFilter(LogicalFilter filter) {
private Set<UnboundRelation> extractUnboundRelationFromFilter(LogicalFilter<?> filter) {
Set<SubqueryExpr> subqueryExprs = filter.getPredicate()
.collect(SubqueryExpr.class::isInstance);
Set<UnboundRelation> relations = new HashSet<>();
@ -413,7 +379,7 @@ public class CascadesContext implements ScheduleContext {
return relations;
}
private Set<UnboundRelation> extractUnboundRelationFromCTE(LogicalCTE cte) {
private Set<UnboundRelation> extractUnboundRelationFromCTE(LogicalCTE<?> cte) {
List<LogicalSubQueryAlias<Plan>> subQueryAliases = cte.getAliasQueries();
Set<UnboundRelation> relations = new HashSet<>();
for (LogicalSubQueryAlias<Plan> subQueryAlias : subQueryAliases) {
@ -463,7 +429,7 @@ public class CascadesContext implements ScheduleContext {
CascadesContext cascadesContext;
private Stack<Table> locked = new Stack<>();
private final Stack<Table> locked = new Stack<>();
/**
* Try to acquire read locks on tables, throw runtime exception once the acquiring for read lock failed.
@ -491,93 +457,49 @@ public class CascadesContext implements ScheduleContext {
}
}
public void putCTEIdToCTEClosure(CTEId cteId, Callable<LogicalPlan> cteClosure) {
this.cteIdToCTEClosure.put(cteId, cteClosure);
}
public void putAllCTEIdToCTEClosure(Map<CTEId, Callable<LogicalPlan>> cteConsumers) {
this.cteIdToCTEClosure.putAll(cteConsumers);
}
public void putCTEIdToConsumer(LogicalCTEConsumer cteConsumer) {
Set<LogicalCTEConsumer> consumers =
this.cteIdToConsumers.computeIfAbsent(cteConsumer.getCteId(), k -> new HashSet<>());
Set<LogicalCTEConsumer> consumers = this.statementContext.getCteIdToConsumers()
.computeIfAbsent(cteConsumer.getCteId(), k -> new HashSet<>());
consumers.add(cteConsumer);
}
public void putAllCTEIdToConsumer(Map<CTEId, Set<LogicalCTEConsumer>> cteConsumers) {
this.cteIdToConsumers.putAll(cteConsumers);
}
public void putCTEIdToProject(CTEId cteId, Expression p) {
Set<Expression> projects = this.cteIdToProjects.computeIfAbsent(cteId, k -> new HashSet<>());
public void putCTEIdToProject(CTEId cteId, NamedExpression p) {
Set<NamedExpression> projects = this.statementContext.getCteIdToProjects()
.computeIfAbsent(cteId, k -> new HashSet<>());
projects.add(p);
}
public Set<Expression> getProjectForProducer(CTEId cteId) {
return this.cteIdToProjects.get(cteId);
}
/**
* Fork for rewritten child tree of CTEProducer.
*/
public CascadesContext forkForCTEProducer(Plan plan) {
CascadesContext cascadesContext = new CascadesContext(plan, memo, statementContext, PhysicalProperties.ANY);
cascadesContext.cteIdToConsumers = cteIdToConsumers;
cascadesContext.cteIdToProjects = cteIdToProjects;
cascadesContext.cteContext = cteContext;
cascadesContext.cteIdToCTEClosure = cteIdToCTEClosure;
cascadesContext.consumerIdToFilters = consumerIdToFilters;
return cascadesContext;
}
public int cteReferencedCount(CTEId cteId) {
Set<LogicalCTEConsumer> cteConsumer = cteIdToConsumers.get(cteId);
if (cteConsumer == null) {
return 0;
}
return cteIdToConsumers.get(cteId).size();
public Set<NamedExpression> getProjectForProducer(CTEId cteId) {
return this.statementContext.getCteIdToProjects().get(cteId);
}
public Map<CTEId, Set<LogicalCTEConsumer>> getCteIdToConsumers() {
return cteIdToConsumers;
return this.statementContext.getCteIdToConsumers();
}
public Map<CTEId, Callable<LogicalPlan>> getCteIdToCTEClosure() {
return cteIdToCTEClosure;
}
public LogicalPlan findCTEPlanForInline(CTEId cteId) {
try {
return cteIdToCTEClosure.get(cteId).call();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public void putConsumerIdToFilter(int id, Expression filter) {
Set<Expression> filters = this.consumerIdToFilters.computeIfAbsent(id, k -> new HashSet<>());
public void putConsumerIdToFilter(RelationId id, Expression filter) {
Set<Expression> filters = this.getConsumerIdToFilters().computeIfAbsent(id, k -> new HashSet<>());
filters.add(filter);
}
public Map<Integer, Set<Expression>> getConsumerIdToFilters() {
return consumerIdToFilters;
public Map<RelationId, Set<Expression>> getConsumerIdToFilters() {
return this.statementContext.getConsumerIdToFilters();
}
public void markConsumerUnderProject(LogicalCTEConsumer cteConsumer) {
Set<Integer> consumerIds =
this.cteIdToConsumerUnderProjects.computeIfAbsent(cteConsumer.getCteId(), k -> new HashSet<>());
consumerIds.add(cteConsumer.getConsumerId());
Set<RelationId> consumerIds = this.statementContext.getCteIdToConsumerUnderProjects()
.computeIfAbsent(cteConsumer.getCteId(), k -> new HashSet<>());
consumerIds.add(cteConsumer.getRelationId());
}
public boolean couldPruneColumnOnProducer(CTEId cteId) {
Set<Integer> consumerIds = this.cteIdToConsumerUnderProjects.get(cteId);
return consumerIds.size() == this.cteIdToConsumers.get(cteId).size();
Set<RelationId> consumerIds = this.statementContext.getCteIdToConsumerUnderProjects().get(cteId);
return consumerIds.size() == this.statementContext.getCteIdToConsumers().get(cteId).size();
}
public void addCTEConsumerGroup(CTEId cteId, Group g, Map<Slot, Slot> producerSlotToConsumerSlot) {
List<Pair<Map<Slot, Slot>, Group>> consumerGroups =
this.cteIdToConsumerGroup.computeIfAbsent(cteId, k -> new ArrayList<>());
this.statementContext.getCteIdToConsumerGroup().computeIfAbsent(cteId, k -> new ArrayList<>());
consumerGroups.add(Pair.of(producerSlotToConsumerSlot, g));
}
@ -585,7 +507,7 @@ public class CascadesContext implements ScheduleContext {
* Update CTE consumer group as producer's stats update
*/
public void updateConsumerStats(CTEId cteId, Statistics statistics) {
List<Pair<Map<Slot, Slot>, Group>> consumerGroups = this.cteIdToConsumerGroup.get(cteId);
List<Pair<Map<Slot, Slot>, Group>> consumerGroups = this.statementContext.getCteIdToConsumerGroup().get(cteId);
for (Pair<Map<Slot, Slot>, Group> p : consumerGroups) {
Map<Slot, Slot> producerSlotToConsumerSlot = p.first;
Statistics updatedConsumerStats = new Statistics(statistics);


@ -268,7 +268,7 @@ public class NereidsPlanner extends Planner {
}
private void initCascadesContext(LogicalPlan plan, PhysicalProperties requireProperties) {
cascadesContext = CascadesContext.newRewriteContext(statementContext, plan, requireProperties);
cascadesContext = CascadesContext.initContext(statementContext, plan, requireProperties);
if (statementContext.getConnectContext().getTables() != null) {
cascadesContext.setTables(statementContext.getConnectContext().getTables());
}
@ -283,7 +283,7 @@ public class NereidsPlanner extends Planner {
* Logical plan rewrite based on a series of heuristic rules.
*/
private void rewrite() {
new Rewriter(cascadesContext).execute();
Rewriter.getWholeTreeRewriter(cascadesContext).execute();
NereidsTracer.logImportantTime("EndRewritePlan");
}


@ -19,10 +19,18 @@ package org.apache.doris.nereids;
import org.apache.doris.analysis.StatementBase;
import org.apache.doris.common.IdGenerator;
import org.apache.doris.common.Pair;
import org.apache.doris.nereids.memo.Group;
import org.apache.doris.nereids.rules.analysis.ColumnAliasGenerator;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.qe.ConnectContext;
import org.apache.doris.qe.OriginStatement;
@ -30,7 +38,8 @@ import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;
import com.google.common.collect.Maps;
import java.util.HashSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.annotation.concurrent.GuardedBy;
@ -42,29 +51,31 @@ public class StatementContext {
private ConnectContext connectContext;
@GuardedBy("this")
private final Map<String, Supplier<Object>> contextCacheMap = Maps.newLinkedHashMap();
private OriginStatement originStatement;
// NOTICE: we set the plan parsed by DorisParser to parsedStatement and if the plan is command, create a
// LogicalPlanAdapter with the logical plan in the command.
private StatementBase parsedStatement;
private ColumnAliasGenerator columnAliasGenerator;
private int maxNAryInnerJoin = 0;
private boolean isDpHyp = false;
private boolean isOtherJoinReorder = false;
private final IdGenerator<ExprId> exprIdGenerator = ExprId.createGenerator();
private final IdGenerator<ObjectId> objectIdGenerator = ObjectId.createGenerator();
private final IdGenerator<RelationId> relationIdGenerator = RelationId.createGenerator();
private final IdGenerator<CTEId> cteIdGenerator = CTEId.createGenerator();
@GuardedBy("this")
private final Map<String, Supplier<Object>> contextCacheMap = Maps.newLinkedHashMap();
// NOTICE: we set the plan parsed by DorisParser to parsedStatement and if the plan is command, create a
// LogicalPlanAdapter with the logical plan in the command.
private StatementBase parsedStatement;
private Set<String> columnNames;
private ColumnAliasGenerator columnAliasGenerator;
private final Map<CTEId, Set<LogicalCTEConsumer>> cteIdToConsumers = new HashMap<>();
private final Map<CTEId, Set<NamedExpression>> cteIdToProjects = new HashMap<>();
private final Map<RelationId, Set<Expression>> consumerIdToFilters = new HashMap<>();
private final Map<CTEId, Set<RelationId>> cteIdToConsumerUnderProjects = new HashMap<>();
// Used to update consumer's stats
private final Map<CTEId, List<Pair<Map<Slot, Slot>, Group>>> cteIdToConsumerGroup = new HashMap<>();
private final Map<CTEId, LogicalPlan> rewrittenCtePlan = new HashMap<>();
public StatementContext() {
this.connectContext = ConnectContext.get();
@ -91,7 +102,7 @@ public class StatementContext {
return originStatement;
}
public void setMaxNArayInnerJoin(int maxNAryInnerJoin) {
public void setMaxNAryInnerJoin(int maxNAryInnerJoin) {
if (maxNAryInnerJoin > this.maxNAryInnerJoin) {
this.maxNAryInnerJoin = maxNAryInnerJoin;
}
@ -129,6 +140,10 @@ public class StatementContext {
return objectIdGenerator.getNextId();
}
public RelationId getNextRelationId() {
return relationIdGenerator.getNextId();
}
public void setParsedStatement(StatementBase parsedStatement) {
this.parsedStatement = parsedStatement;
}
@ -143,17 +158,9 @@ public class StatementContext {
return supplier.get();
}
public Set<String> getColumnNames() {
return columnNames == null ? new HashSet<>() : columnNames;
}
public void setColumnNames(Set<String> columnNames) {
this.columnNames = columnNames;
}
public ColumnAliasGenerator getColumnAliasGenerator() {
return columnAliasGenerator == null
? columnAliasGenerator = new ColumnAliasGenerator(this)
? columnAliasGenerator = new ColumnAliasGenerator()
: columnAliasGenerator;
}
@ -164,4 +171,28 @@ public class StatementContext {
public StatementBase getParsedStatement() {
return parsedStatement;
}
public Map<CTEId, Set<LogicalCTEConsumer>> getCteIdToConsumers() {
return cteIdToConsumers;
}
public Map<CTEId, Set<NamedExpression>> getCteIdToProjects() {
return cteIdToProjects;
}
public Map<RelationId, Set<Expression>> getConsumerIdToFilters() {
return consumerIdToFilters;
}
public Map<CTEId, Set<RelationId>> getCteIdToConsumerUnderProjects() {
return cteIdToConsumerUnderProjects;
}
public Map<CTEId, List<Pair<Map<Slot, Slot>, Group>>> getCteIdToConsumerGroup() {
return cteIdToConsumerGroup;
}
public Map<CTEId, LogicalPlan> getRewrittenCtePlan() {
return rewrittenCtePlan;
}
}


@ -24,11 +24,11 @@ import org.apache.doris.nereids.properties.UnboundLogicalProperties;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.algebra.OneRowRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalLeaf;
import org.apache.doris.nereids.trees.plans.logical.LogicalRelation;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@ -36,37 +36,30 @@ import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
/**
* A relation that contains only one row consist of some constant expressions.
* e.g. select 100, 'value'
*/
public class UnboundOneRowRelation extends LogicalLeaf implements Unbound, OneRowRelation {
public class UnboundOneRowRelation extends LogicalRelation implements Unbound, OneRowRelation {
private final ObjectId id;
private final List<NamedExpression> projects;
public UnboundOneRowRelation(ObjectId id, List<NamedExpression> projects) {
this(id, projects, Optional.empty(), Optional.empty());
public UnboundOneRowRelation(RelationId relationId, List<NamedExpression> projects) {
this(relationId, projects, Optional.empty(), Optional.empty());
}
private UnboundOneRowRelation(ObjectId id,
private UnboundOneRowRelation(RelationId id,
List<NamedExpression> projects,
Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties) {
super(PlanType.LOGICAL_UNBOUND_ONE_ROW_RELATION, groupExpression, logicalProperties);
super(id, PlanType.LOGICAL_UNBOUND_ONE_ROW_RELATION, groupExpression, logicalProperties);
Preconditions.checkArgument(projects.stream().noneMatch(p -> p.containsType(Slot.class)),
"OneRowRelation can not contains any slot");
this.id = id;
this.projects = ImmutableList.copyOf(projects);
}
public ObjectId getId() {
return id;
}
@Override
public <R, C> R accept(PlanVisitor<R, C> visitor, C context) {
return visitor.visitUnboundOneRowRelation(this, context);
@ -84,13 +77,14 @@ public class UnboundOneRowRelation extends LogicalLeaf implements Unbound, OneRo
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new UnboundOneRowRelation(id, projects, groupExpression, Optional.of(logicalPropertiesSupplier.get()));
return new UnboundOneRowRelation(relationId, projects,
groupExpression, Optional.of(logicalPropertiesSupplier.get()));
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new UnboundOneRowRelation(id, projects, groupExpression, logicalProperties);
return new UnboundOneRowRelation(relationId, projects, groupExpression, logicalProperties);
}
@Override
@ -106,28 +100,8 @@ public class UnboundOneRowRelation extends LogicalLeaf implements Unbound, OneRo
@Override
public String toString() {
return Utils.toSqlString("UnboundOneRowRelation",
"relationId", id,
"relationId", relationId,
"projects", projects
);
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
if (!super.equals(o)) {
return false;
}
UnboundOneRowRelation that = (UnboundOneRowRelation) o;
return Objects.equals(id, that.id) && Objects.equals(projects, that.projects);
}
@Override
public int hashCode() {
return Objects.hash(id, projects);
}
}


@ -17,16 +17,15 @@
package org.apache.doris.nereids.analyzer;
import org.apache.doris.catalog.Table;
import org.apache.doris.nereids.exceptions.UnboundException;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.properties.UnboundLogicalProperties;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.logical.LogicalRelation;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@ -50,20 +49,20 @@ public class UnboundRelation extends LogicalRelation implements Unbound {
private final boolean isTempPart;
private final List<String> hints;
public UnboundRelation(ObjectId id, List<String> nameParts) {
public UnboundRelation(RelationId id, List<String> nameParts) {
this(id, nameParts, Optional.empty(), Optional.empty(), ImmutableList.of(), false, ImmutableList.of());
}
public UnboundRelation(ObjectId id, List<String> nameParts, List<String> partNames, boolean isTempPart) {
public UnboundRelation(RelationId id, List<String> nameParts, List<String> partNames, boolean isTempPart) {
this(id, nameParts, Optional.empty(), Optional.empty(), partNames, isTempPart, ImmutableList.of());
}
public UnboundRelation(ObjectId id, List<String> nameParts, List<String> partNames, boolean isTempPart,
public UnboundRelation(RelationId id, List<String> nameParts, List<String> partNames, boolean isTempPart,
List<String> hints) {
this(id, nameParts, Optional.empty(), Optional.empty(), partNames, isTempPart, hints);
}
public UnboundRelation(ObjectId id, List<String> nameParts, Optional<GroupExpression> groupExpression,
public UnboundRelation(RelationId id, List<String> nameParts, Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<String> partNames, boolean isTempPart,
List<String> hints) {
super(id, PlanType.LOGICAL_UNBOUND_RELATION, groupExpression, logicalProperties);
@ -73,11 +72,6 @@ public class UnboundRelation extends LogicalRelation implements Unbound {
this.hints = ImmutableList.copyOf(Objects.requireNonNull(hints, "hints should not be null."));
}
@Override
public Table getTable() {
throw new UnsupportedOperationException("unbound relation cannot get table");
}
public List<String> getNameParts() {
return nameParts;
}
@ -94,14 +88,15 @@ public class UnboundRelation extends LogicalRelation implements Unbound {
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new UnboundRelation(id, nameParts, groupExpression, Optional.of(getLogicalProperties()), partNames,
isTempPart, hints);
return new UnboundRelation(relationId, nameParts,
groupExpression, Optional.of(getLogicalProperties()),
partNames, isTempPart, hints);
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new UnboundRelation(id, nameParts, groupExpression, logicalProperties, partNames,
return new UnboundRelation(relationId, nameParts, groupExpression, logicalProperties, partNames,
isTempPart, hints);
}
@ -113,7 +108,7 @@ public class UnboundRelation extends LogicalRelation implements Unbound {
@Override
public String toString() {
List<Object> args = Lists.newArrayList(
"id", id,
"id", relationId,
"nameParts", StringUtils.join(nameParts, ".")
);
if (CollectionUtils.isNotEmpty(hints)) {
@ -133,30 +128,6 @@ public class UnboundRelation extends LogicalRelation implements Unbound {
throw new UnsupportedOperationException(this.getClass().getSimpleName() + " don't support getExpression()");
}
public ObjectId getId() {
return id;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
if (!super.equals(o)) {
return false;
}
UnboundRelation that = (UnboundRelation) o;
return id.equals(that.id);
}
@Override
public int hashCode() {
return Objects.hash(id);
}
public List<String> getPartNames() {
return partNames;
}


@ -25,11 +25,11 @@ import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.TVFProperties;
import org.apache.doris.nereids.trees.expressions.functions.table.TableValuedFunction;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.algebra.TVFRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalLeaf;
import org.apache.doris.nereids.trees.plans.logical.LogicalRelation;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@ -38,20 +38,18 @@ import java.util.Objects;
import java.util.Optional;
/** UnboundTVFRelation */
public class UnboundTVFRelation extends LogicalLeaf implements TVFRelation, Unbound {
public class UnboundTVFRelation extends LogicalRelation implements TVFRelation, Unbound {
private final ObjectId id;
private final String functionName;
private final TVFProperties properties;
public UnboundTVFRelation(ObjectId id, String functionName, TVFProperties properties) {
public UnboundTVFRelation(RelationId id, String functionName, TVFProperties properties) {
this(id, functionName, properties, Optional.empty(), Optional.empty());
}
public UnboundTVFRelation(ObjectId id, String functionName, TVFProperties properties,
public UnboundTVFRelation(RelationId id, String functionName, TVFProperties properties,
Optional<GroupExpression> groupExpression, Optional<LogicalProperties> logicalProperties) {
super(PlanType.LOGICAL_UNBOUND_TVF_RELATION, groupExpression, logicalProperties);
this.id = id;
super(id, PlanType.LOGICAL_UNBOUND_TVF_RELATION, groupExpression, logicalProperties);
this.functionName = Objects.requireNonNull(functionName, "functionName can not be null");
this.properties = Objects.requireNonNull(properties, "properties can not be null");
}
@ -64,10 +62,6 @@ public class UnboundTVFRelation extends LogicalLeaf implements TVFRelation, Unbo
return properties;
}
public ObjectId getId() {
return id;
}
@Override
public TableValuedFunction getFunction() {
throw new UnboundException("getFunction");
@ -95,14 +89,14 @@ public class UnboundTVFRelation extends LogicalLeaf implements TVFRelation, Unbo
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new UnboundTVFRelation(id, functionName, properties, groupExpression,
return new UnboundTVFRelation(relationId, functionName, properties, groupExpression,
Optional.of(getLogicalProperties()));
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new UnboundTVFRelation(id, functionName, properties, groupExpression, logicalProperties);
return new UnboundTVFRelation(relationId, functionName, properties, groupExpression, logicalProperties);
}
@Override
@ -112,24 +106,4 @@ public class UnboundTVFRelation extends LogicalLeaf implements TVFRelation, Unbo
"arguments", properties
);
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
if (!super.equals(o)) {
return false;
}
UnboundTVFRelation that = (UnboundTVFRelation) o;
return functionName.equals(that.functionName) && properties.equals(that.properties) && id.equals(that.id);
}
@Override
public int hashCode() {
return Objects.hash(super.hashCode(), functionName, properties, id);
}
}


@ -411,7 +411,7 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
context.addScanNode(scanNode);
ScanNode finalScanNode = scanNode;
context.getRuntimeTranslator().ifPresent(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(fileScan.getId()).forEach(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(fileScan.getRelationId()).forEach(
expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, finalScanNode, context)
)
);
@ -453,7 +453,7 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
Utils.execWithUncheckedException(esScanNode::init);
context.addScanNode(esScanNode);
context.getRuntimeTranslator().ifPresent(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(esScan.getId()).forEach(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(esScan.getRelationId()).forEach(
expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, esScanNode, context)
)
);
@ -475,7 +475,7 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
Utils.execWithUncheckedException(jdbcScanNode::init);
context.addScanNode(jdbcScanNode);
context.getRuntimeTranslator().ifPresent(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(jdbcScan.getId()).forEach(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(jdbcScan.getRelationId()).forEach(
expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, jdbcScanNode, context)
)
);
@ -541,8 +541,9 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
// TODO: process translate runtime filter in one place
// use real plan node to present rf apply and rf generator
context.getRuntimeTranslator().ifPresent(
runtimeFilterTranslator -> runtimeFilterTranslator.getTargetOnScanNode(olapScan.getId()).forEach(
expr -> runtimeFilterTranslator.translateRuntimeFilterTarget(expr, olapScanNode, context)
runtimeFilterTranslator -> runtimeFilterTranslator.getTargetOnScanNode(olapScan.getRelationId())
.forEach(expr -> runtimeFilterTranslator.translateRuntimeFilterTarget(
expr, olapScanNode, context)
)
);
// TODO: we need to remove all finalizeForNereids
@ -599,8 +600,8 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
TupleDescriptor tupleDescriptor = generateTupleDesc(slots, table, context);
SchemaScanNode scanNode = new SchemaScanNode(context.nextPlanNodeId(), tupleDescriptor);
context.getRuntimeTranslator().ifPresent(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(schemaScan.getId()).forEach(
expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, scanNode, context)
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(schemaScan.getRelationId())
.forEach(expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, scanNode, context)
)
);
scanNode.finalizeForNereids();
@ -614,14 +615,14 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
@Override
public PlanFragment visitPhysicalTVFRelation(PhysicalTVFRelation tvfRelation, PlanTranslatorContext context) {
List<Slot> slots = tvfRelation.getLogicalProperties().getOutput();
TupleDescriptor tupleDescriptor = generateTupleDesc(slots, tvfRelation.getTable(), context);
TupleDescriptor tupleDescriptor = generateTupleDesc(slots, tvfRelation.getFunction().getTable(), context);
TableValuedFunctionIf catalogFunction = tvfRelation.getFunction().getCatalogFunction();
ScanNode scanNode = catalogFunction.getScanNode(context.nextPlanNodeId(), tupleDescriptor);
Utils.execWithUncheckedException(scanNode::init);
context.getRuntimeTranslator().ifPresent(
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(tvfRelation.getId()).forEach(
expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, scanNode, context)
runtimeFilterGenerator -> runtimeFilterGenerator.getTargetOnScanNode(tvfRelation.getRelationId())
.forEach(expr -> runtimeFilterGenerator.translateRuntimeFilterTarget(expr, scanNode, context)
)
);
Utils.execWithUncheckedException(scanNode::finalizeForNereids);
@ -820,7 +821,7 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
multiCastDataSink.getDestinations().add(Lists.newArrayList());
// update expr to slot mapping
for (Slot producerSlot : cteProducer.getProjects()) {
for (Slot producerSlot : cteProducer.getOutput()) {
Slot consumerSlot = cteConsumer.getProducerToConsumerSlotMap().get(producerSlot);
SlotRef slotRef = context.findSlotRef(producerSlot.getExprId());
context.addExprIdSlotRefPair(consumerSlot.getExprId(), slotRef);
@ -834,21 +835,21 @@ public class PhysicalPlanTranslator extends DefaultPlanVisitor<PlanFragment, Pla
PlanFragment child = cteProducer.child().accept(this, context);
CTEId cteId = cteProducer.getCteId();
context.getPlanFragments().remove(child);
MultiCastPlanFragment cteProduce = new MultiCastPlanFragment(child);
MultiCastPlanFragment multiCastPlanFragment = new MultiCastPlanFragment(child);
MultiCastDataSink multiCastDataSink = new MultiCastDataSink();
cteProduce.setSink(multiCastDataSink);
multiCastPlanFragment.setSink(multiCastDataSink);
List<Expr> outputs = cteProducer.getProjects().stream()
List<Expr> outputs = cteProducer.getOutput().stream()
.map(e -> ExpressionTranslator.translate(e, context))
.collect(Collectors.toList());
cteProduce.setOutputExprs(outputs);
context.getCteProduceFragments().put(cteId, cteProduce);
multiCastPlanFragment.setOutputExprs(outputs);
context.getCteProduceFragments().put(cteId, multiCastPlanFragment);
context.getCteProduceMap().put(cteId, cteProducer);
if (context.getRuntimeTranslator().isPresent()) {
context.getRuntimeTranslator().get().getContext().getCteProduceMap().put(cteId, cteProducer);
}
context.getPlanFragments().add(cteProduce);
context.getPlanFragments().add(multiCastPlanFragment);
return child;
}


@ -28,7 +28,7 @@ import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.physical.AbstractPhysicalJoin;
import org.apache.doris.nereids.trees.plans.physical.RuntimeFilter;
import org.apache.doris.planner.HashJoinNode;
@ -68,7 +68,7 @@ public class RuntimeFilterTranslator {
return context;
}
public List<Slot> getTargetOnScanNode(ObjectId id) {
public List<Slot> getTargetOnScanNode(RelationId id) {
return context.getTargetOnOlapScanNodeMap().getOrDefault(id, Collections.emptyList());
}


@ -20,6 +20,7 @@ package org.apache.doris.nereids.jobs.executor;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.jobs.rewrite.RewriteJob;
import org.apache.doris.nereids.rules.analysis.AdjustAggregateNullableForEmptySet;
import org.apache.doris.nereids.rules.analysis.AnalyzeCTE;
import org.apache.doris.nereids.rules.analysis.BindExpression;
import org.apache.doris.nereids.rules.analysis.BindInsertTargetTable;
import org.apache.doris.nereids.rules.analysis.BindRelation;
@ -31,7 +32,6 @@ import org.apache.doris.nereids.rules.analysis.FillUpMissingSlots;
import org.apache.doris.nereids.rules.analysis.NormalizeRepeat;
import org.apache.doris.nereids.rules.analysis.ProjectToGlobalAggregate;
import org.apache.doris.nereids.rules.analysis.ProjectWithDistinctToAggregate;
import org.apache.doris.nereids.rules.analysis.RegisterCTE;
import org.apache.doris.nereids.rules.analysis.ReplaceExpressionByChildOutput;
import org.apache.doris.nereids.rules.analysis.ResolveOrdinalInOrderByAndGroupBy;
import org.apache.doris.nereids.rules.analysis.SubqueryToApply;
@ -49,9 +49,7 @@ public class Analyzer extends AbstractBatchJobExecutor {
public static final List<RewriteJob> DEFAULT_ANALYZE_JOBS = buildAnalyzeJobs(Optional.empty());
private Optional<CustomTableResolver> customTableResolver;
private List<RewriteJob> jobs;
private final List<RewriteJob> jobs;
/**
* Execute the analysis job with scope.
@ -63,7 +61,7 @@ public class Analyzer extends AbstractBatchJobExecutor {
public Analyzer(CascadesContext cascadesContext, Optional<CustomTableResolver> customTableResolver) {
super(cascadesContext);
this.customTableResolver = Objects.requireNonNull(customTableResolver, "customTableResolver cannot be null");
Objects.requireNonNull(customTableResolver, "customTableResolver cannot be null");
this.jobs = !customTableResolver.isPresent() ? DEFAULT_ANALYZE_JOBS : buildAnalyzeJobs(customTableResolver);
}
@ -81,21 +79,15 @@ public class Analyzer extends AbstractBatchJobExecutor {
private static List<RewriteJob> buildAnalyzeJobs(Optional<CustomTableResolver> customTableResolver) {
return jobs(
topDown(
new RegisterCTE()
),
topDown(new AnalyzeCTE()),
bottomUp(
new BindRelation(customTableResolver.orElse(null)),
new BindRelation(customTableResolver),
new CheckPolicy(),
new UserAuthentication(),
new BindExpression()
),
topDown(
new BindInsertTargetTable()
),
bottomUp(
new CheckBound()
),
topDown(new BindInsertTargetTable()),
bottomUp(new CheckBound()),
bottomUp(
new ProjectToGlobalAggregate(),
// this rule check's the logicalProject node's isDistinct property


@ -18,11 +18,11 @@
package org.apache.doris.nereids.jobs.executor;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.jobs.rewrite.CostBasedRewriteJob;
import org.apache.doris.nereids.jobs.rewrite.RewriteJob;
import org.apache.doris.nereids.processor.pre.EliminateLogicalSelectHint;
import org.apache.doris.nereids.rules.RuleSet;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.rules.analysis.AddDefaultLimit;
import org.apache.doris.nereids.rules.analysis.AdjustAggregateNullableForEmptySet;
import org.apache.doris.nereids.rules.analysis.AvgDistinctToSumDivCount;
import org.apache.doris.nereids.rules.analysis.CheckAfterRewrite;
@ -31,12 +31,12 @@ import org.apache.doris.nereids.rules.expression.CheckLegalityAfterRewrite;
import org.apache.doris.nereids.rules.expression.ExpressionNormalization;
import org.apache.doris.nereids.rules.expression.ExpressionOptimization;
import org.apache.doris.nereids.rules.expression.ExpressionRewrite;
import org.apache.doris.nereids.rules.rewrite.AddDefaultLimit;
import org.apache.doris.nereids.rules.rewrite.AdjustConjunctsReturnType;
import org.apache.doris.nereids.rules.rewrite.AdjustNullable;
import org.apache.doris.nereids.rules.rewrite.AggScalarSubQueryToWindowFunction;
import org.apache.doris.nereids.rules.rewrite.BuildAggForUnion;
import org.apache.doris.nereids.rules.rewrite.BuildCTEAnchorAndCTEProducer;
import org.apache.doris.nereids.rules.rewrite.CTEProducerRewrite;
import org.apache.doris.nereids.rules.rewrite.CTEInline;
import org.apache.doris.nereids.rules.rewrite.CheckAndStandardizeWindowFunctionAndFrame;
import org.apache.doris.nereids.rules.rewrite.CheckDataTypes;
import org.apache.doris.nereids.rules.rewrite.CheckMatchExpression;
@ -64,7 +64,6 @@ import org.apache.doris.nereids.rules.rewrite.InferFilterNotNull;
import org.apache.doris.nereids.rules.rewrite.InferJoinNotNull;
import org.apache.doris.nereids.rules.rewrite.InferPredicates;
import org.apache.doris.nereids.rules.rewrite.InferSetOperatorDistinct;
import org.apache.doris.nereids.rules.rewrite.InlineCTE;
import org.apache.doris.nereids.rules.rewrite.MergeFilters;
import org.apache.doris.nereids.rules.rewrite.MergeOneRowRelationIntoUnion;
import org.apache.doris.nereids.rules.rewrite.MergeProjects;
@ -74,6 +73,7 @@ import org.apache.doris.nereids.rules.rewrite.NormalizeSort;
import org.apache.doris.nereids.rules.rewrite.PruneFileScanPartition;
import org.apache.doris.nereids.rules.rewrite.PruneOlapScanPartition;
import org.apache.doris.nereids.rules.rewrite.PruneOlapScanTablet;
import org.apache.doris.nereids.rules.rewrite.PullUpCteAnchor;
import org.apache.doris.nereids.rules.rewrite.PushFilterInsideJoin;
import org.apache.doris.nereids.rules.rewrite.PushProjectIntoOneRowRelation;
import org.apache.doris.nereids.rules.rewrite.PushProjectThroughUnion;
@ -82,6 +82,7 @@ import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughWindow;
import org.apache.doris.nereids.rules.rewrite.PushdownLimit;
import org.apache.doris.nereids.rules.rewrite.PushdownTopNThroughWindow;
import org.apache.doris.nereids.rules.rewrite.ReorderJoin;
import org.apache.doris.nereids.rules.rewrite.RewriteCteChildren;
import org.apache.doris.nereids.rules.rewrite.SemiJoinCommute;
import org.apache.doris.nereids.rules.rewrite.SimplifyAggGroupBy;
import org.apache.doris.nereids.rules.rewrite.SplitLimit;
@ -96,15 +97,14 @@ import org.apache.doris.nereids.rules.rewrite.mv.SelectMaterializedIndexWithAggr
import org.apache.doris.nereids.rules.rewrite.mv.SelectMaterializedIndexWithoutAggregate;
import java.util.List;
import java.util.stream.Collectors;
/**
* Apply rules to optimize logical plan.
* Apply rules to rewrite logical plan.
*/
public class Rewriter extends AbstractBatchJobExecutor {
public static final List<RewriteJob> REWRITE_JOBS = jobs(
bottomUp(new InlineCTE()),
custom(RuleType.ADD_DEFAULT_LIMIT, AddDefaultLimit::new),
private static final List<RewriteJob> CTE_CHILDREN_REWRITE_JOBS = jobs(
topic("Plan Normalization",
topDown(
new EliminateOrderByConstant(),
@ -128,6 +128,7 @@ public class Rewriter extends AbstractBatchJobExecutor {
new ExtractSingleTableExpressionFromDisjunction()
)
),
// subquery unnesting relay on ExpressionNormalization to extract common factor expression
topic("Subquery unnesting",
costBased(
custom(RuleType.AGG_SCALAR_SUBQUERY_TO_WINDOW_FUNCTION,
@ -277,11 +278,6 @@ public class Rewriter extends AbstractBatchJobExecutor {
bottomUp(RuleSet.PUSH_DOWN_FILTERS),
custom(RuleType.ELIMINATE_UNNECESSARY_PROJECT, EliminateUnnecessaryProject::new)
),
topic("Match expression check",
topDown(
new CheckMatchExpression()
)
),
// this rule batch must keep at the end of rewrite to do some plan check
topic("Final rewrite and check",
custom(RuleType.ENSURE_PROJECT_ON_TOP_JOIN, EnsureProjectOnTopJoin::new),
@ -290,30 +286,74 @@ public class Rewriter extends AbstractBatchJobExecutor {
new MergeProjects()
),
custom(RuleType.ADJUST_CONJUNCTS_RETURN_TYPE, AdjustConjunctsReturnType::new),
custom(RuleType.ADJUST_NULLABLE, AdjustNullable::new),
bottomUp(
new ExpressionRewrite(CheckLegalityAfterRewrite.INSTANCE),
new CheckMatchExpression(),
new CheckAfterRewrite()
)),
topic("MATERIALIZED CTE", topDown(
)
),
topic("Push project and filter on cte consumer to cte producer",
topDown(
new CollectFilterAboveConsumer(),
new CollectProjectAboveConsumer(),
new BuildCTEAnchorAndCTEProducer()),
topDown(new CTEProducerRewrite()))
new CollectProjectAboveConsumer()
)
)
);
private static final List<RewriteJob> WHOLE_TREE_REWRITE_JOBS
= getWholeTreeRewriteJobs(true);
private static final List<RewriteJob> WHOLE_TREE_REWRITE_JOBS_WITHOUT_COST_BASED
= getWholeTreeRewriteJobs(false);
private final List<RewriteJob> rewriteJobs;
public Rewriter(CascadesContext cascadesContext) {
super(cascadesContext);
this.rewriteJobs = REWRITE_JOBS;
}
public Rewriter(CascadesContext cascadesContext, List<RewriteJob> rewriteJobs) {
private Rewriter(CascadesContext cascadesContext, List<RewriteJob> rewriteJobs) {
super(cascadesContext);
this.rewriteJobs = rewriteJobs;
}
public static Rewriter getWholeTreeRewriterWithoutCostBasedJobs(CascadesContext cascadesContext) {
return new Rewriter(cascadesContext, WHOLE_TREE_REWRITE_JOBS_WITHOUT_COST_BASED);
}
public static Rewriter getWholeTreeRewriter(CascadesContext cascadesContext) {
return new Rewriter(cascadesContext, WHOLE_TREE_REWRITE_JOBS);
}
public static Rewriter getCteChildrenRewriter(CascadesContext cascadesContext, List<RewriteJob> jobs) {
return new Rewriter(cascadesContext, jobs);
}
public static Rewriter getWholeTreeRewriterWithCustomJobs(CascadesContext cascadesContext, List<RewriteJob> jobs) {
return new Rewriter(cascadesContext, getWholeTreeRewriteJobs(jobs));
}
private static List<RewriteJob> getWholeTreeRewriteJobs(boolean withCostBased) {
List<RewriteJob> withoutCostBased = Rewriter.CTE_CHILDREN_REWRITE_JOBS.stream()
.filter(j -> !(j instanceof CostBasedRewriteJob))
.collect(Collectors.toList());
return getWholeTreeRewriteJobs(withCostBased ? CTE_CHILDREN_REWRITE_JOBS : withoutCostBased);
}
private static List<RewriteJob> getWholeTreeRewriteJobs(List<RewriteJob> jobs) {
return jobs(
topic("cte inline and pull up all cte anchor",
custom(RuleType.PULL_UP_CTE_ANCHOR, PullUpCteAnchor::new),
custom(RuleType.CTE_INLINE, CTEInline::new)
),
topic("process limit session variables",
custom(RuleType.ADD_DEFAULT_LIMIT, AddDefaultLimit::new)
),
topic("rewrite cte sub-tree",
custom(RuleType.REWRITE_CTE_CHILDREN, () -> new RewriteCteChildren(jobs))
),
topic("whole plan check",
custom(RuleType.ADJUST_NULLABLE, AdjustNullable::new)
)
);
}
@Override
public List<RewriteJob> getJobs() {
return rewriteJobs;
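To make the new split between whole-tree jobs and per-CTE-child jobs easier to follow, here is a rough, standalone sketch of the composition that `getWholeTreeRewriteJobs` performs: anchor pull-up and inlining first, then the default-limit handling, then the per-child job list run on each CTE sub-tree, then a whole-plan check. The `Phase` record and the string "plans" below are invented for illustration only and are not Doris types.

```java
// Standalone sketch only: invented Phase/pipeline types, not Doris classes.
// It models how the whole-tree job list wraps the per-CTE-child jobs with the
// anchor pull-up / inline phase in front and the whole-plan check at the end.
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class WholeTreePipelineSketch {
    record Phase(String topic, UnaryOperator<String> job) {}

    static List<Phase> wholeTreeJobs(List<Phase> cteChildrenJobs) {
        List<Phase> jobs = new ArrayList<>();
        jobs.add(new Phase("pull up cte anchor / cte inline", p -> p + " | anchors-pulled-up"));
        jobs.add(new Phase("process limit session variables", p -> p + " | default-limit"));
        jobs.addAll(cteChildrenJobs); // run once per CTE producer sub-tree and once for the main query
        jobs.add(new Phase("whole plan check", p -> p + " | nullable-adjusted"));
        return jobs;
    }

    public static void main(String[] args) {
        List<Phase> cteChildren =
                List.of(new Phase("rewrite cte sub-tree", p -> p + " | child-rewritten"));
        String plan = "parsedPlan";
        for (Phase phase : wholeTreeJobs(cteChildren)) {
            plan = phase.job().apply(plan);
        }
        System.out.println(plan);
    }
}
```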

View File

@ -24,6 +24,8 @@ import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.jobs.executor.Optimizer;
import org.apache.doris.nereids.jobs.executor.Rewriter;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@ -48,37 +50,29 @@ public class CostBasedRewriteJob implements RewriteJob {
@Override
public void execute(JobContext jobContext) {
CascadesContext cascadesContext = jobContext.getCascadesContext();
CascadesContext skipCboRuleCtx = CascadesContext.newRewriteContext(
cascadesContext, cascadesContext.getRewritePlan(),
cascadesContext.getCurrentJobContext().getRequiredProperties());
CascadesContext applyCboRuleCtx = CascadesContext.newRewriteContext(
cascadesContext, cascadesContext.getRewritePlan(),
cascadesContext.getCurrentJobContext().getRequiredProperties());
CascadesContext currentCtx = jobContext.getCascadesContext();
CascadesContext skipCboRuleCtx = CascadesContext.newCurrentTreeContext(currentCtx);
CascadesContext applyCboRuleCtx = CascadesContext.newCurrentTreeContext(currentCtx);
// execute cbo rule on one candidate
new Rewriter(applyCboRuleCtx, rewriteJobs).execute();
Rewriter.getCteChildrenRewriter(applyCboRuleCtx, rewriteJobs).execute();
if (skipCboRuleCtx.getRewritePlan().deepEquals(applyCboRuleCtx.getRewritePlan())) {
// this means the rewrite did not change anything
return;
}
// Do rewrite on 2 candidates
new Rewriter(skipCboRuleCtx, jobContext.getRemainJobs()).execute();
new Rewriter(applyCboRuleCtx, jobContext.getRemainJobs()).execute();
// Do optimize on 2 candidates
new Optimizer(skipCboRuleCtx).execute();
new Optimizer(applyCboRuleCtx).execute();
Optional<Pair<Cost, GroupExpression>> skipCboRuleCost = skipCboRuleCtx.getMemo().getRoot()
.getLowestCostPlan(skipCboRuleCtx.getCurrentJobContext().getRequiredProperties());
Optional<Pair<Cost, GroupExpression>> appliedCboRuleCost = applyCboRuleCtx.getMemo().getRoot()
.getLowestCostPlan(applyCboRuleCtx.getCurrentJobContext().getRequiredProperties());
// compare two candidates
Optional<Pair<Cost, GroupExpression>> skipCboRuleCost = getCost(currentCtx, skipCboRuleCtx, jobContext);
Optional<Pair<Cost, GroupExpression>> appliedCboRuleCost = getCost(currentCtx, applyCboRuleCtx, jobContext);
// If either candidate failed to optimize, just return
if (!skipCboRuleCost.isPresent() || !appliedCboRuleCost.isPresent()) {
LOG.warn("Cbo rewrite execute failed");
LOG.warn("Cbo rewrite execute failed on sql: {}, jobs are {}, plan is {}.",
currentCtx.getStatementContext().getOriginStatement().originStmt,
rewriteJobs, currentCtx.getRewritePlan());
return;
}
// If the candidate applied cbo rule is better, replace the original plan with it.
if (appliedCboRuleCost.get().first.getValue() < skipCboRuleCost.get().first.getValue()) {
cascadesContext.setRewritePlan(applyCboRuleCtx.getRewritePlan());
currentCtx.setRewritePlan(applyCboRuleCtx.getRewritePlan());
}
}
@ -87,4 +81,27 @@ public class CostBasedRewriteJob implements RewriteJob {
// TODO: currently, we do not support executing it more than once.
return true;
}
private Optional<Pair<Cost, GroupExpression>> getCost(CascadesContext currentCtx,
CascadesContext cboCtx, JobContext jobContext) {
// Do subtree rewrite
Rewriter.getCteChildrenRewriter(cboCtx, jobContext.getRemainJobs()).execute();
CascadesContext rootCtx = currentCtx.getRoot();
if (rootCtx.getRewritePlan() instanceof LogicalCTEAnchor) {
// set subtree rewrite cache
currentCtx.getStatementContext().getRewrittenCtePlan()
.put(currentCtx.getCurrentTree().orElse(null), (LogicalPlan) cboCtx.getRewritePlan());
// Do Whole tree rewrite
CascadesContext rootCtxCopy = CascadesContext.newCurrentTreeContext(rootCtx);
Rewriter.getWholeTreeRewriterWithoutCostBasedJobs(rootCtxCopy).execute();
// Do optimize
new Optimizer(rootCtxCopy).execute();
return rootCtxCopy.getMemo().getRoot().getLowestCostPlan(
rootCtxCopy.getCurrentJobContext().getRequiredProperties());
} else {
new Optimizer(cboCtx).execute();
return cboCtx.getMemo().getRoot().getLowestCostPlan(
cboCtx.getCurrentJobContext().getRequiredProperties());
}
}
}
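For orientation, the decision this job makes can be reduced to a standalone sketch: apply the cost-based rule to a copy, price both candidates, and keep the rewritten plan only when it is strictly cheaper. The string "plan" and the placeholder rule and cost model below are not the real CascadesContext/Memo machinery; note also that when the tree root is a LogicalCTEAnchor, `getCost` above prices the whole tree (with the rewritten sub-tree cached) rather than the sub-tree alone.

```java
// Standalone sketch only: the control flow of a cost-based rewrite decision,
// with a made-up string "plan", rule, and cost model.
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

public class CostBasedRewriteSketch {
    static String costBasedRewrite(String plan,
                                   UnaryOperator<String> cboRule,
                                   ToDoubleFunction<String> costModel) {
        String applied = cboRule.apply(plan);
        if (applied.equals(plan)) {
            return plan; // the rule changed nothing: skip the expensive comparison
        }
        double skipCost = costModel.applyAsDouble(plan);
        double appliedCost = costModel.applyAsDouble(applied);
        // keep the rewritten candidate only when it is strictly cheaper
        return appliedCost < skipCost ? applied : plan;
    }

    public static void main(String[] args) {
        String kept = costBasedRewrite(
                "scan(t1)-nestedLoopJoin-scan(t2)",
                p -> p.replace("nestedLoopJoin", "hashJoin"), // hypothetical rule
                String::length);                              // hypothetical cost model
        System.out.println(kept); // the shorter (cheaper) candidate wins
    }
}
```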

View File

@ -21,6 +21,7 @@ import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.jobs.JobType;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import java.util.List;
import java.util.Objects;
@ -102,7 +103,10 @@ public class PlanTreeRewriteBottomUpJob extends PlanTreeRewriteJob {
Plan child = children.get(i);
RewriteJobContext childRewriteJobContext = new RewriteJobContext(
child, clearedStateContext, i, false);
pushJob(new PlanTreeRewriteBottomUpJob(childRewriteJobContext, context, rules));
// NOTICE: this relies on PullUpCteAnchor having been applied
if (!(rewriteJobContext.plan instanceof LogicalCTEAnchor)) {
pushJob(new PlanTreeRewriteBottomUpJob(childRewriteJobContext, context, rules));
}
}
}
@ -142,7 +146,10 @@ public class PlanTreeRewriteBottomUpJob extends PlanTreeRewriteJob {
// we should transform these new plan nodes too.
RewriteJobContext childRewriteJobContext = new RewriteJobContext(
child, rewriteJobContext, i, false);
pushJob(new PlanTreeRewriteBottomUpJob(childRewriteJobContext, context, rules));
// NOTICE: this relies on PullUpCteAnchor having been applied
if (!(rewriteJobContext.plan instanceof LogicalCTEAnchor)) {
pushJob(new PlanTreeRewriteBottomUpJob(childRewriteJobContext, context, rules));
}
}
}
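This bottom-up job (and the top-down job further below) now refuses to schedule rewrite jobs for the children of a LogicalCTEAnchor; those sub-trees are rewritten separately by RewriteCteChildren. A minimal, self-contained model of that guard follows; the `Node` record is invented for the sketch and is not a Nereids plan type.

```java
// Standalone sketch: a tree walk that schedules child jobs but refuses to
// descend through "anchor" nodes, mirroring the guard added in the diff above.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class AnchorAwareWalkSketch {
    record Node(String name, boolean isCteAnchor, List<Node> children) {}

    static void rewriteTree(Node root) {
        Deque<Node> jobs = new ArrayDeque<>();
        jobs.push(root);
        while (!jobs.isEmpty()) {
            Node current = jobs.pop();
            System.out.println("rewrite " + current.name());
            if (current.isCteAnchor()) {
                // children of an anchor are handled by RewriteCteChildren, not here
                continue;
            }
            current.children().forEach(jobs::push);
        }
    }

    public static void main(String[] args) {
        Node producer = new Node("cteProducer", false, List.of());
        Node consumerSide = new Node("project(cteConsumer)", false, List.of());
        rewriteTree(new Node("cteAnchor", true, List.of(producer, consumerSide)));
    }
}
```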

View File

@ -21,6 +21,7 @@ import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.jobs.JobType;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import java.util.List;
import java.util.Objects;
@ -59,7 +60,10 @@ public class PlanTreeRewriteTopDownJob extends PlanTreeRewriteJob {
for (int i = children.size() - 1; i >= 0; i--) {
RewriteJobContext childRewriteJobContext = new RewriteJobContext(
children.get(i), newRewriteJobContext, i, false);
pushJob(new PlanTreeRewriteTopDownJob(childRewriteJobContext, context, rules));
// NOTICE: this relies on PullUpCteAnchor having been applied
if (!(rewriteJobContext.plan instanceof LogicalCTEAnchor)) {
pushJob(new PlanTreeRewriteTopDownJob(childRewriteJobContext, context, rules));
}
}
} else {
// All the children have already been visited. Just link the children plans to the current node.

View File

@ -50,8 +50,6 @@ public class RootPlanTreeRewriteJob implements RewriteJob {
context.getScheduleContext().pushJob(rewriteJob);
cascadesContext.getJobScheduler().executeJobPool(cascadesContext);
cascadesContext.setCurrentRootRewriteJobContext(null);
}
@Override
@ -72,7 +70,6 @@ public class RootPlanTreeRewriteJob implements RewriteJob {
RootRewriteJobContext(Plan plan, boolean childrenVisited, JobContext jobContext) {
super(plan, null, -1, childrenVisited);
this.jobContext = Objects.requireNonNull(jobContext, "jobContext cannot be null");
jobContext.getCascadesContext().setCurrentRootRewriteJobContext(this);
}
@Override

View File

@ -19,9 +19,6 @@ package org.apache.doris.nereids.memo;
import org.apache.doris.common.IdGenerator;
import org.apache.doris.common.Pair;
import org.apache.doris.nereids.CTEContext;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.StatementContext;
import org.apache.doris.nereids.cost.Cost;
import org.apache.doris.nereids.cost.CostCalculator;
import org.apache.doris.nereids.metrics.EventChannel;
@ -263,17 +260,6 @@ public class Memo {
return planWithChildren.withGroupExpression(groupExpression);
}
/**
* Utility function to create a new {@link CascadesContext} with this Memo.
*/
public CascadesContext newCascadesContext(StatementContext statementContext) {
return new CascadesContext(null, this, statementContext, PhysicalProperties.ANY);
}
public CascadesContext newCascadesContext(StatementContext statementContext, CTEContext cteContext) {
return new CascadesContext(null, this, statementContext, cteContext, PhysicalProperties.ANY);
}
/**
* init memo by a first plan.
* @param plan first plan

View File

@ -161,6 +161,7 @@ import org.apache.doris.nereids.trees.expressions.Or;
import org.apache.doris.nereids.trees.expressions.OrderExpression;
import org.apache.doris.nereids.trees.expressions.Regexp;
import org.apache.doris.nereids.trees.expressions.ScalarSubquery;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import org.apache.doris.nereids.trees.expressions.Subtract;
import org.apache.doris.nereids.trees.expressions.TVFProperties;
import org.apache.doris.nereids.trees.expressions.TimestampArithmetic;
@ -244,7 +245,6 @@ import org.apache.doris.nereids.trees.plans.logical.UsingJoin;
import org.apache.doris.nereids.types.DataType;
import org.apache.doris.nereids.types.coercion.CharacterType;
import org.apache.doris.nereids.util.ExpressionUtils;
import org.apache.doris.nereids.util.RelationUtil;
import org.apache.doris.policy.FilterType;
import org.apache.doris.policy.PolicyTypeEnum;
import org.apache.doris.qe.ConnectContext;
@ -335,7 +335,7 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
@Override
public LogicalPlan visitUpdate(UpdateContext ctx) {
LogicalPlan query = withCheckPolicy(new UnboundRelation(
RelationUtil.newRelationId(), visitMultipartIdentifier(ctx.tableName)));
StatementScopeIdGenerator.newRelationId(), visitMultipartIdentifier(ctx.tableName)));
query = withTableAlias(query, ctx.tableAlias());
if (ctx.fromClause() != null) {
query = withRelations(query, ctx.fromClause().relation());
@ -354,7 +354,7 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
List<String> tableName = visitMultipartIdentifier(ctx.tableName);
List<String> partitions = ctx.partition == null ? ImmutableList.of() : visitIdentifierList(ctx.partition);
LogicalPlan query = withTableAlias(withCheckPolicy(
new UnboundRelation(RelationUtil.newRelationId(), tableName)), ctx.tableAlias());
new UnboundRelation(StatementScopeIdGenerator.newRelationId(), tableName)), ctx.tableAlias());
if (ctx.USING() != null) {
query = withRelations(query, ctx.relation());
}
@ -582,7 +582,8 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
}
LogicalPlan checkedRelation = withCheckPolicy(
new UnboundRelation(RelationUtil.newRelationId(), tableId, partitionNames, isTempPart, relationHints));
new UnboundRelation(StatementScopeIdGenerator.newRelationId(),
tableId, partitionNames, isTempPart, relationHints));
LogicalPlan plan = withTableAlias(checkedRelation, ctx.tableAlias());
for (LateralViewContext lateralViewContext : ctx.lateralView()) {
plan = withGenerate(plan, lateralViewContext);
@ -613,7 +614,7 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
String value = parseTVFPropertyItem(argument.value);
map.put(key, value);
}
LogicalPlan relation = new UnboundTVFRelation(RelationUtil.newRelationId(),
LogicalPlan relation = new UnboundTVFRelation(StatementScopeIdGenerator.newRelationId(),
functionName, new TVFProperties(map.build()));
return withTableAlias(relation, ctx.tableAlias());
});
@ -1488,7 +1489,7 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
return ParserUtils.withOrigin(selectCtx, () -> {
// fromClause does not exist.
List<NamedExpression> projects = getNamedExpressions(selectCtx.namedExpressionSeq());
return new UnboundOneRowRelation(RelationUtil.newRelationId(), projects);
return new UnboundOneRowRelation(StatementScopeIdGenerator.newRelationId(), projects);
});
}
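The parser now draws relation ids from StatementScopeIdGenerator so that the id minted for an UnboundRelation at parse time can be carried over to the bound relation (see the BindRelation changes below). The sketch below is a minimal model of that idea only: the types and the AtomicInteger counter are invented, and the real generator's internals are not shown in this diff.

```java
// Minimal model (assumed internals, invented types) of a statement-scoped id
// generator: each statement gets its own counter, and the id handed to an
// unbound relation is reused by the bound relation instead of being re-generated.
import java.util.concurrent.atomic.AtomicInteger;

public class StatementScopeIdSketch {
    static final class RelationId {
        private final int id;
        RelationId(int id) { this.id = id; }
        @Override public String toString() { return "RelationId#" + id; }
    }

    // invented stand-in for StatementScopeIdGenerator
    static final class StatementIdGenerator {
        private final AtomicInteger next = new AtomicInteger();
        RelationId newRelationId() { return new RelationId(next.incrementAndGet()); }
    }

    record UnboundRelation(RelationId id, String name) {}
    record BoundOlapScan(RelationId id, String table) {}

    public static void main(String[] args) {
        StatementIdGenerator statementScope = new StatementIdGenerator();
        UnboundRelation unbound = new UnboundRelation(statementScope.newRelationId(), "t1");
        // binding reuses the id minted at parse time, so both nodes share RelationId#1
        BoundOlapScan bound = new BoundOlapScan(unbound.id(), "t1");
        System.out.println(unbound + " / " + bound);
    }
}
```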

View File

@ -26,8 +26,8 @@ import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.physical.AbstractPhysicalJoin;
import org.apache.doris.nereids.trees.plans.physical.PhysicalCTEProducer;
import org.apache.doris.nereids.trees.plans.physical.PhysicalHashJoin;
@ -63,7 +63,7 @@ public class RuntimeFilterContext {
private final Map<Plan, List<ExprId>> joinToTargetExprId = Maps.newHashMap();
// olap scan node that contains target of a runtime filter.
private final Map<ObjectId, List<Slot>> targetOnOlapScanNodeMap = Maps.newHashMap();
private final Map<RelationId, List<Slot>> targetOnOlapScanNodeMap = Maps.newHashMap();
private final List<org.apache.doris.planner.RuntimeFilter> legacyFilters = Lists.newArrayList();
@ -157,7 +157,7 @@ public class RuntimeFilterContext {
}
}
public void setTargetsOnScanNode(ObjectId id, Slot slot) {
public void setTargetsOnScanNode(RelationId id, Slot slot) {
this.targetOnOlapScanNodeMap.computeIfAbsent(id, k -> Lists.newArrayList()).add(slot);
}
@ -186,7 +186,7 @@ public class RuntimeFilterContext {
return targetExprIdToFilter;
}
public Map<ObjectId, List<Slot>> getTargetOnOlapScanNodeMap() {
public Map<RelationId, List<Slot>> getTargetOnOlapScanNodeMap() {
return targetOnOlapScanNodeMap;
}

View File

@ -203,7 +203,7 @@ public class RuntimeFilterGenerator extends PlanPostProcessor {
ImmutableList.of(bitmapContains.child(1)), type, i, join, isNot, -1L);
ctx.addJoinToTargetMap(join, olapScanSlot.getExprId());
ctx.setTargetExprIdToFilter(olapScanSlot.getExprId(), filter);
ctx.setTargetsOnScanNode(aliasTransferMap.get(targetSlot).first.getId(),
ctx.setTargetsOnScanNode(aliasTransferMap.get(targetSlot).first.getRelationId(),
olapScanSlot);
join.addBitmapRuntimeFilterCondition(bitmapRuntimeFilterCondition);
}
@ -322,7 +322,7 @@ public class RuntimeFilterGenerator extends PlanPostProcessor {
equalTo.right(), ImmutableList.of(olapScanSlot), type, exprOrder, join, buildSideNdv);
ctx.addJoinToTargetMap(join, olapScanSlot.getExprId());
ctx.setTargetExprIdToFilter(olapScanSlot.getExprId(), filter);
ctx.setTargetsOnScanNode(aliasTransferMap.get(unwrappedSlot).first.getId(), olapScanSlot);
ctx.setTargetsOnScanNode(aliasTransferMap.get(unwrappedSlot).first.getRelationId(), olapScanSlot);
}
}
@ -369,7 +369,7 @@ public class RuntimeFilterGenerator extends PlanPostProcessor {
}
targetList.add(olapScanSlot);
ctx.addJoinToTargetMap(join, olapScanSlot.getExprId());
ctx.setTargetsOnScanNode(aliasTransferMap.get(origSlot).first.getId(), olapScanSlot);
ctx.setTargetsOnScanNode(aliasTransferMap.get(origSlot).first.getRelationId(), olapScanSlot);
}
}
if (!targetList.isEmpty()) {
@ -612,7 +612,7 @@ public class RuntimeFilterGenerator extends PlanPostProcessor {
PhysicalOlapScan scan = entry.getValue();
targetList.add(targetSlot);
ctx.addJoinToTargetMap(join, targetSlot.getExprId());
ctx.setTargetsOnScanNode(scan.getId(), targetSlot);
ctx.setTargetsOnScanNode(scan.getRelationId(), targetSlot);
}
// build multi-target runtime filter
// since always on different join, set the expr_order as 0

View File

@ -131,7 +131,7 @@ public class RuntimeFilterPruner extends PlanPostProcessor {
@Override
public PhysicalRelation visitPhysicalScan(PhysicalRelation scan, CascadesContext context) {
RuntimeFilterContext rfCtx = context.getRuntimeFilterContext();
List<Slot> slots = rfCtx.getTargetOnOlapScanNodeMap().get(scan.getId());
List<Slot> slots = rfCtx.getTargetOnOlapScanNodeMap().get(scan.getRelationId());
if (slots != null) {
for (Slot slot : slots) {
//if this scan node is the target of any effective RF, it is effective source

View File

@ -42,8 +42,8 @@ import org.apache.doris.nereids.rules.exploration.join.SemiJoinSemiJoinTranspose
import org.apache.doris.nereids.rules.implementation.AggregateStrategies;
import org.apache.doris.nereids.rules.implementation.LogicalAssertNumRowsToPhysicalAssertNumRows;
import org.apache.doris.nereids.rules.implementation.LogicalCTEAnchorToPhysicalCTEAnchor;
import org.apache.doris.nereids.rules.implementation.LogicalCTEConsumeToPhysicalCTEConsume;
import org.apache.doris.nereids.rules.implementation.LogicalCTEProduceToPhysicalCTEProduce;
import org.apache.doris.nereids.rules.implementation.LogicalCTEConsumerToPhysicalCTEConsumer;
import org.apache.doris.nereids.rules.implementation.LogicalCTEProducerToPhysicalCTEProducer;
import org.apache.doris.nereids.rules.implementation.LogicalEmptyRelationToPhysicalEmptyRelation;
import org.apache.doris.nereids.rules.implementation.LogicalEsScanToPhysicalEsScan;
import org.apache.doris.nereids.rules.implementation.LogicalExceptToPhysicalExcept;
@ -76,8 +76,6 @@ import org.apache.doris.nereids.rules.rewrite.MergeProjects;
import org.apache.doris.nereids.rules.rewrite.PushdownAliasThroughJoin;
import org.apache.doris.nereids.rules.rewrite.PushdownExpressionsInHashCondition;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughAggregation;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughCTE;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughCTEAnchor;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughJoin;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughProject;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughRepeat;
@ -85,8 +83,6 @@ import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughSetOperation;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughSort;
import org.apache.doris.nereids.rules.rewrite.PushdownFilterThroughWindow;
import org.apache.doris.nereids.rules.rewrite.PushdownJoinOtherCondition;
import org.apache.doris.nereids.rules.rewrite.PushdownProjectThroughCTE;
import org.apache.doris.nereids.rules.rewrite.PushdownProjectThroughCTEAnchor;
import org.apache.doris.nereids.rules.rewrite.PushdownProjectThroughLimit;
import com.google.common.collect.ImmutableList;
@ -133,15 +129,11 @@ public class RuleSet {
new MergeFilters(),
new MergeGenerates(),
new MergeLimits(),
new PushdownFilterThroughCTE(),
new PushdownProjectThroughCTE(),
new PushdownFilterThroughCTEAnchor(),
new PushdownProjectThroughCTEAnchor(),
new PushdownAliasThroughJoin());
public static final List<Rule> IMPLEMENTATION_RULES = planRuleFactories()
.add(new LogicalCTEProduceToPhysicalCTEProduce())
.add(new LogicalCTEConsumeToPhysicalCTEConsume())
.add(new LogicalCTEProducerToPhysicalCTEProducer())
.add(new LogicalCTEConsumerToPhysicalCTEConsumer())
.add(new LogicalCTEAnchorToPhysicalCTEAnchor())
.add(new LogicalRepeatToPhysicalRepeat())
.add(new LogicalFilterToPhysicalFilter())

View File

@ -70,7 +70,7 @@ public enum RuleType {
PROJECT_TO_GLOBAL_AGGREGATE(RuleTypeClass.REWRITE),
PROJECT_WITH_DISTINCT_TO_AGGREGATE(RuleTypeClass.REWRITE),
AVG_DISTINCT_TO_SUM_DIV_COUNT(RuleTypeClass.REWRITE),
REGISTER_CTE(RuleTypeClass.REWRITE),
ANALYZE_CTE(RuleTypeClass.REWRITE),
RELATION_AUTHENTICATION(RuleTypeClass.VALIDATION),
ADJUST_NULLABLE_FOR_PROJECT_SLOT(RuleTypeClass.REWRITE),
@ -235,15 +235,12 @@ public enum RuleType {
// ensure having project on the top join
ENSURE_PROJECT_ON_TOP_JOIN(RuleTypeClass.REWRITE),
BUILD_CTE_ANCHOR_AND_CTE_PRODUCER(RuleTypeClass.REWRITE),
PULL_UP_CTE_ANCHOR(RuleTypeClass.REWRITE),
CTE_INLINE(RuleTypeClass.REWRITE),
REWRITE_CTE_CHILDREN(RuleTypeClass.REWRITE),
COLLECT_FILTER_ON_CONSUMER(RuleTypeClass.REWRITE),
COLLECT_PROJECT_ABOVE_CONSUMER(RuleTypeClass.REWRITE),
COLLECT_PROJECT_ABOVE_FILTER_CONSUMER(RuleTypeClass.REWRITE),
CTE_PRODUCER_REWRITE(RuleTypeClass.REWRITE),
PUSH_DOWN_PROJECT_THROUGH_CTE(RuleTypeClass.REWRITE),
PUSH_DOWN_PROJECT_THROUGH_CTE_ANCHOR(RuleTypeClass.REWRITE),
INLINE_CTE(RuleTypeClass.REWRITE),
REWRITE_SENTINEL(RuleTypeClass.REWRITE),
// exploration rules
@ -287,8 +284,8 @@ public enum RuleType {
LOGICAL_JOIN_TO_NESTED_LOOP_JOIN_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_PROJECT_TO_PHYSICAL_PROJECT_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_FILTER_TO_PHYSICAL_FILTER_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_CTE_PRODUCE_TO_PHYSICAL_CTE_PRODUCER_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_CTE_CONSUME_TO_PHYSICAL_CTE_CONSUMER_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_CTE_PRODUCER_TO_PHYSICAL_CTE_PRODUCER_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_CTE_CONSUMER_TO_PHYSICAL_CTE_CONSUMER_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_CTE_ANCHOR_TO_PHYSICAL_CTE_ANCHOR_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_SORT_TO_PHYSICAL_QUICK_SORT_RULE(RuleTypeClass.IMPLEMENTATION),
LOGICAL_TOP_N_TO_PHYSICAL_TOP_N_RULE(RuleTypeClass.IMPLEMENTATION),

View File

@ -0,0 +1,127 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.analysis;
import org.apache.doris.common.Pair;
import org.apache.doris.nereids.CTEContext;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.exceptions.AnalysisException;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalSubQueryAlias;
import com.google.common.collect.ImmutableList;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
/**
* Register CTEs: check column aliases and CTE names, analyze each CTE's query, and store the
* analyzed logical plan in the CTEContext.
* A LogicalProject node will be added on top of the initial logical plan if column aliases exist.
* The LogicalCTE node is eliminated after registration.
*/
public class AnalyzeCTE extends OneAnalysisRuleFactory {
@Override
public Rule build() {
return logicalCTE().thenApply(ctx -> {
LogicalCTE<Plan> logicalCTE = ctx.root;
// step 1. analyzed all cte plan
Pair<CTEContext, List<LogicalCTEProducer<Plan>>> result = analyzeCte(logicalCTE, ctx.cascadesContext);
CascadesContext outerCascadesCtx = CascadesContext.newContextWithCteContext(
ctx.cascadesContext, logicalCTE.child(), result.first);
outerCascadesCtx.newAnalyzer().analyze();
Plan root = outerCascadesCtx.getRewritePlan();
// construct anchors from back to front, because a later CTE may depend on earlier ones
for (int i = result.second.size() - 1; i >= 0; i--) {
root = new LogicalCTEAnchor<>(result.second.get(i).getCteId(), result.second.get(i), root);
}
return root;
}).toRule(RuleType.ANALYZE_CTE);
}
/**
* register and store CTEs in CTEContext
*/
private Pair<CTEContext, List<LogicalCTEProducer<Plan>>> analyzeCte(
LogicalCTE<Plan> logicalCTE, CascadesContext cascadesContext) {
CTEContext outerCteCtx = cascadesContext.getCteContext();
List<LogicalSubQueryAlias<Plan>> aliasQueries = logicalCTE.getAliasQueries();
List<LogicalCTEProducer<Plan>> cteProducerPlans = new ArrayList<>();
for (LogicalSubQueryAlias<Plan> aliasQuery : aliasQueries) {
String cteName = aliasQuery.getAlias();
if (outerCteCtx.containsCTE(cteName)) {
throw new AnalysisException("CTE name [" + cteName + "] cannot be used more than once.");
}
// use a chained CTEContext so that earlier CTEs are visible to later ones
CTEContext innerCteCtx = outerCteCtx;
LogicalPlan parsedCtePlan = (LogicalPlan) aliasQuery.child();
CascadesContext innerCascadesCtx = CascadesContext.newContextWithCteContext(
cascadesContext, parsedCtePlan, innerCteCtx);
innerCascadesCtx.newAnalyzer().analyze();
LogicalPlan analyzedCtePlan = (LogicalPlan) innerCascadesCtx.getRewritePlan();
checkColumnAlias(aliasQuery, analyzedCtePlan.getOutput());
CTEId cteId = cascadesContext.getStatementContext().getNextCTEId();
LogicalSubQueryAlias<Plan> logicalSubQueryAlias =
aliasQuery.withChildren(ImmutableList.of(analyzedCtePlan));
outerCteCtx = new CTEContext(cteId, logicalSubQueryAlias, outerCteCtx);
outerCteCtx.setAnalyzedPlan(logicalSubQueryAlias);
cteProducerPlans.add(new LogicalCTEProducer<>(cteId, logicalSubQueryAlias));
}
return Pair.of(outerCteCtx, cteProducerPlans);
}
/**
* check columnAliases' size and name
*/
private void checkColumnAlias(LogicalSubQueryAlias<Plan> aliasQuery, List<Slot> outputSlots) {
if (aliasQuery.getColumnAliases().isPresent()) {
List<String> columnAlias = aliasQuery.getColumnAliases().get();
// if columnAlias has fewer entries than outputSlots, we only rename the corresponding number
// of leading slots with the column aliases.
if (columnAlias.size() > outputSlots.size()) {
throw new AnalysisException("CTE [" + aliasQuery.getAlias() + "] returns "
+ columnAlias.size() + " columns, but " + outputSlots.size() + " labels were specified."
+ " The number of column labels must be smaller or equal to the number of returned columns.");
}
Set<String> names = new HashSet<>();
// column alias cannot be used more than once
columnAlias.forEach(alias -> {
if (names.contains(alias.toLowerCase())) {
throw new AnalysisException("Duplicated CTE column alias:"
+ " [" + alias.toLowerCase() + "] in CTE [" + aliasQuery.getAlias() + "]");
}
names.add(alias.toLowerCase());
});
}
}
}
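AnalyzeCTE folds the producers into anchors from the last CTE to the first, so a later CTE, which may consume an earlier one, ends up inside the earlier producer's anchor. The following tiny standalone model shows that fold; `Producer` and `Anchor` are invented records, not the Nereids plan nodes.

```java
// Standalone model of the back-to-front anchor fold: the last producer is
// wrapped first, so the first producer becomes the outermost anchor.
import java.util.List;

public class AnchorFoldSketch {
    record Producer(String name) {}
    record Anchor(Producer producer, Object child) {}

    public static void main(String[] args) {
        // cte2 may reference cte1, so cte1's producer must end up outermost
        List<Producer> producers = List.of(new Producer("cte1"), new Producer("cte2"));
        Object root = "outerQuery";
        for (int i = producers.size() - 1; i >= 0; i--) {
            root = new Anchor(producers.get(i), root);
        }
        // prints Anchor[producer=cte1, child=Anchor[producer=cte2, child=outerQuery]]
        System.out.println(root);
    }
}
```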

View File

@ -484,7 +484,7 @@ public class BindExpression implements AnalysisRuleFactory {
.map(project -> bindSlot(project, ImmutableList.of(), ctx.cascadesContext))
.map(project -> bindFunction(project, ctx.cascadesContext))
.collect(Collectors.toList());
return new LogicalOneRowRelation(projects);
return new LogicalOneRowRelation(oneRowRelation.getRelationId(), projects);
})
),
RuleType.BINDING_SET_OPERATION_SLOT.build(
@ -508,27 +508,18 @@ public class BindExpression implements AnalysisRuleFactory {
}
// we need to cast before the set operation, because these slots may be used for shuffle,
// so we must cast them before shuffling to get the correct hash code.
List<List<Expression>> castExpressions = setOperation.collectCastExpressions();
List<List<NamedExpression>> childrenProjections = setOperation.collectChildrenProjections();
ImmutableList.Builder<Plan> newChildren = ImmutableList.builder();
for (int i = 0; i < castExpressions.size(); i++) {
if (castExpressions.stream().allMatch(SlotReference.class::isInstance)) {
for (int i = 0; i < childrenProjections.size(); i++) {
if (childrenProjections.stream().allMatch(SlotReference.class::isInstance)) {
newChildren.add(setOperation.child(i));
} else {
List<NamedExpression> projections = castExpressions.get(i).stream()
.map(e -> {
if (e instanceof SlotReference) {
return (SlotReference) e;
} else {
return new Alias(e, e.toSql());
}
}).collect(ImmutableList.toImmutableList());
LogicalProject<Plan> logicalProject = new LogicalProject<>(projections,
setOperation.child(i));
newChildren.add(logicalProject);
newChildren.add(new LogicalProject<>(childrenProjections.get(i), setOperation.child(i)));
}
}
List<NamedExpression> newOutputs = setOperation.buildNewOutputs(castExpressions.get(0));
return setOperation.withNewOutputs(newOutputs).withChildren(newChildren.build());
setOperation = (LogicalSetOperation) setOperation.withChildren(newChildren.build());
List<NamedExpression> newOutputs = setOperation.buildNewOutputs();
return setOperation.withNewOutputs(newOutputs);
})
),
RuleType.BINDING_GENERATE_SLOT.build(
@ -618,7 +609,6 @@ public class BindExpression implements AnalysisRuleFactory {
.collect(Collectors.toList());
}
@SuppressWarnings("unchecked")
private <E extends Expression> E bindSlot(E expr, Plan input, CascadesContext cascadesContext) {
return bindSlot(expr, input, cascadesContext, true, true);
}
@ -700,7 +690,7 @@ public class BindExpression implements AnalysisRuleFactory {
if (!(function instanceof TableValuedFunction)) {
throw new AnalysisException(function.toSql() + " is not a TableValuedFunction");
}
return new LogicalTVFRelation(unboundTVFRelation.getId(), (TableValuedFunction) function);
return new LogicalTVFRelation(unboundTVFRelation.getRelationId(), (TableValuedFunction) function);
}
private void checkSameNameSlot(List<Slot> childOutputs, String subQueryAlias) {

View File

@ -64,20 +64,19 @@ import org.apache.commons.lang3.StringUtils;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import javax.annotation.Nullable;
/**
* Rule to bind relations in query plan.
*/
public class BindRelation extends OneAnalysisRuleFactory {
private CustomTableResolver customTableResolver;
private final Optional<CustomTableResolver> customTableResolver;
public BindRelation() {
this(null);
this(Optional.empty());
}
public BindRelation(@Nullable CustomTableResolver customTableResolver) {
public BindRelation(Optional<CustomTableResolver> customTableResolver) {
this.customTableResolver = customTableResolver;
}
@ -122,13 +121,10 @@ public class BindRelation extends OneAnalysisRuleFactory {
// check if it is a CTE's name
CTEContext cteContext = cascadesContext.getCteContext().findCTEContext(tableName).orElse(null);
if (cteContext != null) {
Optional<LogicalPlan> analyzedCte = cteContext.getReuse(tableName);
Optional<LogicalPlan> analyzedCte = cteContext.getAnalyzedCTEPlan(tableName);
if (analyzedCte.isPresent()) {
LogicalCTEConsumer logicalCTEConsumer =
new LogicalCTEConsumer(Optional.empty(), Optional.empty(),
analyzedCte.get(), cteContext.getCteId(), tableName);
cascadesContext.putCTEIdToConsumer(logicalCTEConsumer);
return logicalCTEConsumer;
return new LogicalCTEConsumer(unboundRelation.getRelationId(),
cteContext.getCteId(), tableName, analyzedCte.get());
}
}
List<String> tableQualifier = RelationUtil.getQualifierName(cascadesContext.getConnectContext(),
@ -137,12 +133,14 @@ public class BindRelation extends OneAnalysisRuleFactory {
if (cascadesContext.getTables() != null) {
table = cascadesContext.getTableByName(tableName);
}
if (customTableResolver != null) {
table = customTableResolver.apply(tableQualifier);
}
if (table == null) {
// In some cases, even though we have already called "cascadesContext.getTableByName",
// it may still return null, so we check it in the catalog again for safety.
if (customTableResolver.isPresent()) {
table = customTableResolver.get().apply(tableQualifier);
}
}
// In some cases, even though we have already called "cascadesContext.getTableByName",
// it may still return null, so we check it in the catalog again for safety.
if (table == null) {
table = RelationUtil.getTable(tableQualifier, cascadesContext.getConnectContext().getEnv());
}
@ -154,9 +152,11 @@ public class BindRelation extends OneAnalysisRuleFactory {
List<String> tableQualifier = RelationUtil.getQualifierName(cascadesContext.getConnectContext(),
unboundRelation.getNameParts());
TableIf table = null;
if (customTableResolver != null) {
table = customTableResolver.apply(tableQualifier);
if (customTableResolver.isPresent()) {
table = customTableResolver.get().apply(tableQualifier);
}
// In some cases, even though we have already called "cascadesContext.getTableByName",
// it may still return null, so we check it in the catalog again for safety.
if (table == null) {
table = RelationUtil.getTable(tableQualifier, cascadesContext.getConnectContext().getEnv());
}
@ -167,10 +167,10 @@ public class BindRelation extends OneAnalysisRuleFactory {
LogicalOlapScan scan;
List<Long> partIds = getPartitionIds(table, unboundRelation);
if (!CollectionUtils.isEmpty(partIds)) {
scan = new LogicalOlapScan(RelationUtil.newRelationId(),
scan = new LogicalOlapScan(unboundRelation.getRelationId(),
(OlapTable) table, ImmutableList.of(tableQualifier.get(1)), partIds, unboundRelation.getHints());
} else {
scan = new LogicalOlapScan(RelationUtil.newRelationId(),
scan = new LogicalOlapScan(unboundRelation.getRelationId(),
(OlapTable) table, ImmutableList.of(tableQualifier.get(1)), unboundRelation.getHints());
}
if (!Util.showHiddenColumns() && scan.getTable().hasDeleteSign()
@ -212,15 +212,16 @@ public class BindRelation extends OneAnalysisRuleFactory {
return new LogicalSubQueryAlias<>(tableQualifier, hiveViewPlan);
}
}
return new LogicalFileScan(RelationUtil.newRelationId(),
return new LogicalFileScan(unboundRelation.getRelationId(),
(HMSExternalTable) table, ImmutableList.of(dbName));
case SCHEMA:
return new LogicalSchemaScan(RelationUtil.newRelationId(), table, ImmutableList.of(dbName));
return new LogicalSchemaScan(unboundRelation.getRelationId(),
table, ImmutableList.of(dbName));
case JDBC_EXTERNAL_TABLE:
case JDBC:
return new LogicalJdbcScan(RelationUtil.newRelationId(), table, ImmutableList.of(dbName));
return new LogicalJdbcScan(unboundRelation.getRelationId(), table, ImmutableList.of(dbName));
case ES_EXTERNAL_TABLE:
return new LogicalEsScan(RelationUtil.newRelationId(),
return new LogicalEsScan(unboundRelation.getRelationId(),
(EsExternalTable) table, ImmutableList.of(dbName));
default:
throw new AnalysisException("Unsupported tableType:" + table.getType());
@ -241,7 +242,7 @@ public class BindRelation extends OneAnalysisRuleFactory {
private Plan parseAndAnalyzeView(String viewSql, CascadesContext parentContext) {
LogicalPlan parsedViewPlan = new NereidsParser().parseSingle(viewSql);
CascadesContext viewContext = CascadesContext.newRewriteContext(
CascadesContext viewContext = CascadesContext.initContext(
parentContext.getStatementContext(), parsedViewPlan, PhysicalProperties.ANY);
viewContext.newAnalyzer().analyze();
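BindRelation now carries the custom resolver as an Optional and consults it only when the statement-level table cache misses, before finally falling back to the catalog. Below is a simplified, standalone model of that lookup order; the names and string-based types are invented, whereas the real code works on TableIf instances and multi-part qualifiers.

```java
// Standalone sketch of the lookup order: per-statement cache first, then the
// optional custom resolver, then the catalog as the last resort.
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class TableLookupSketch {
    static String resolve(String name,
                          Map<String, String> statementCache,
                          Optional<Function<String, String>> customResolver,
                          Function<String, String> catalog) {
        String table = statementCache.get(name);
        if (table == null && customResolver.isPresent()) {
            table = customResolver.get().apply(name);
        }
        if (table == null) {
            // the cached lookup may still miss, so finally fall back to the catalog
            table = catalog.apply(name);
        }
        return table;
    }

    public static void main(String[] args) {
        String resolved = resolve("t1", Map.of(), Optional.empty(), n -> "catalog:" + n);
        System.out.println(resolved);
    }
}
```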

View File

@ -105,10 +105,14 @@ public class CheckAfterRewrite extends OneAnalysisRuleFactory {
.collect(Collectors.toSet());
notFromChildren = removeValidSlotsNotFromChildren(notFromChildren, childrenOutput);
if (!notFromChildren.isEmpty()) {
throw new AnalysisException(String.format("Input slot(s) not in child's output: %s in plan: %s",
throw new AnalysisException(String.format("Input slot(s) not in child's output: %s in plan: %s,"
+ " child output is: %s",
StringUtils.join(notFromChildren.stream()
.map(ExpressionTrait::toSql)
.collect(Collectors.toSet()), ", "), plan));
.map(ExpressionTrait::toString)
.collect(Collectors.toSet()), ", "), plan,
plan.children().stream()
.flatMap(child -> child.getOutput().stream())
.collect(Collectors.toSet())));
}
}

View File

@ -21,9 +21,6 @@
package org.apache.doris.nereids.rules.analysis;
import org.apache.doris.common.AliasGenerator;
import org.apache.doris.nereids.StatementContext;
import com.google.common.base.Preconditions;
/**
* Generate the table name required in the rewrite process.
@ -31,9 +28,7 @@ import com.google.common.base.Preconditions;
public class ColumnAliasGenerator extends AliasGenerator {
private static final String DEFAULT_COL_ALIAS_PREFIX = "$c$";
public ColumnAliasGenerator(StatementContext statementContext) {
Preconditions.checkNotNull(statementContext);
public ColumnAliasGenerator() {
aliasPrefix = DEFAULT_COL_ALIAS_PREFIX;
usedAliases.addAll(statementContext.getColumnNames());
}
}

View File

@ -19,6 +19,7 @@ package org.apache.doris.nereids.rules.analysis;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.rules.rewrite.OneRewriteRuleFactory;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import com.google.common.collect.ImmutableList;
@ -29,7 +30,7 @@ import com.google.common.collect.ImmutableList;
* <p>
* TODO: refactor group merge strategy to support the feature above
*/
public class LogicalSubQueryAliasToLogicalProject extends OneAnalysisRuleFactory {
public class LogicalSubQueryAliasToLogicalProject extends OneRewriteRuleFactory {
@Override
public Rule build() {
return RuleType.LOGICAL_SUB_QUERY_ALIAS_TO_LOGICAL_PROJECT.build(

View File

@ -1,127 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.analysis;
import org.apache.doris.nereids.CTEContext;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.exceptions.AnalysisException;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalSubQueryAlias;
import com.google.common.collect.ImmutableList;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
/**
* Register CTE, includes checking columnAliases, checking CTE name, analyzing each CTE and store the
* analyzed logicalPlan of CTE's query in CTEContext;
* A LogicalProject node will be added to the root of the initial logicalPlan if there exist columnAliases.
* Node LogicalCTE will be eliminated after registering.
*/
public class RegisterCTE extends OneAnalysisRuleFactory {
@Override
public Rule build() {
return logicalCTE().whenNot(LogicalCTE::isRegistered).thenApply(ctx -> {
LogicalCTE<Plan> logicalCTE = ctx.root;
List<LogicalSubQueryAlias<Plan>> analyzedCTE = register(logicalCTE, ctx.cascadesContext);
return new LogicalCTE<>(analyzedCTE, logicalCTE.child(), true,
logicalCTE.getCteNameToId());
}).toRule(RuleType.REGISTER_CTE);
}
/**
* register and store CTEs in CTEContext
*/
private List<LogicalSubQueryAlias<Plan>> register(LogicalCTE<Plan> logicalCTE,
CascadesContext cascadesContext) {
CTEContext cteCtx = cascadesContext.getCteContext();
List<LogicalSubQueryAlias<Plan>> aliasQueries = logicalCTE.getAliasQueries();
List<LogicalSubQueryAlias<Plan>> analyzedCTE = new ArrayList<>();
for (LogicalSubQueryAlias<Plan> aliasQuery : aliasQueries) {
String cteName = aliasQuery.getAlias();
if (cteCtx.containsCTE(cteName)) {
throw new AnalysisException("CTE name [" + cteName + "] cannot be used more than once.");
}
// we should use a chain to ensure visible of cte
CTEContext localCteContext = cteCtx;
LogicalPlan parsedPlan = (LogicalPlan) aliasQuery.child();
CascadesContext localCascadesContext = CascadesContext.newRewriteContext(
cascadesContext.getStatementContext(), parsedPlan, localCteContext);
localCascadesContext.newAnalyzer().analyze();
LogicalPlan analyzedCteBody = (LogicalPlan) localCascadesContext.getRewritePlan();
cascadesContext.putAllCTEIdToConsumer(localCascadesContext.getCteIdToConsumers());
cascadesContext.putAllCTEIdToCTEClosure(localCascadesContext.getCteIdToCTEClosure());
if (aliasQuery.getColumnAliases().isPresent()) {
checkColumnAlias(aliasQuery, analyzedCteBody.getOutput());
}
CTEId cteId = logicalCTE.findCTEId(aliasQuery.getAlias());
cteCtx = new CTEContext(aliasQuery, localCteContext, cteId);
LogicalSubQueryAlias<Plan> logicalSubQueryAlias =
aliasQuery.withChildren(ImmutableList.of(analyzedCteBody));
cteCtx.setAnalyzedPlan(logicalSubQueryAlias);
Callable<LogicalPlan> cteClosure = () -> {
CascadesContext localCascadesContextInClosure = CascadesContext.newRewriteContext(
cascadesContext.getStatementContext(), aliasQuery, localCteContext);
localCascadesContextInClosure.newAnalyzer().analyze();
return (LogicalPlan) localCascadesContextInClosure.getRewritePlan();
};
cascadesContext.putCTEIdToCTEClosure(cteId, cteClosure);
analyzedCTE.add(logicalSubQueryAlias);
}
cascadesContext.setCteContext(cteCtx);
return analyzedCTE;
}
/**
* check columnAliases' size and name
*/
private void checkColumnAlias(LogicalSubQueryAlias<Plan> aliasQuery, List<Slot> outputSlots) {
List<String> columnAlias = aliasQuery.getColumnAliases().get();
// if the size of columnAlias is smaller than outputSlots' size, we will replace the corresponding number
// of front slots with columnAlias.
if (columnAlias.size() > outputSlots.size()) {
throw new AnalysisException("CTE [" + aliasQuery.getAlias() + "] returns " + columnAlias.size()
+ " columns, but " + outputSlots.size() + " labels were specified. The number of column labels must "
+ "be smaller or equal to the number of returned columns.");
}
Set<String> names = new HashSet<>();
// column alias cannot be used more than once
columnAlias.forEach(alias -> {
if (names.contains(alias.toLowerCase())) {
throw new AnalysisException("Duplicated CTE column alias: [" + alias.toLowerCase()
+ "] in CTE [" + aliasQuery.getAlias() + "]");
}
names.add(alias);
});
}
}

View File

@ -30,7 +30,6 @@ import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import com.google.common.base.Preconditions;
import org.apache.commons.lang3.StringUtils;
import java.util.List;
@ -70,15 +69,10 @@ class SlotBinder extends SubExprAnalyzer {
public Expression visitUnboundAlias(UnboundAlias unboundAlias, CascadesContext context) {
Expression child = unboundAlias.child().accept(this, context);
if (unboundAlias.getAlias().isPresent()) {
collectColumnNames(unboundAlias.getAlias().get());
return new Alias(child, unboundAlias.getAlias().get());
}
if (child instanceof NamedExpression) {
collectColumnNames(((NamedExpression) child).getName());
} else if (child instanceof NamedExpression) {
return new Alias(child, ((NamedExpression) child).getName());
} else {
// TODO: resolve aliases
collectColumnNames(child.toSql());
return new Alias(child, child.toSql());
}
}
@ -223,11 +217,4 @@ class SlotBinder extends SubExprAnalyzer {
+ StringUtils.join(nameParts, "."));
}).collect(Collectors.toList());
}
private void collectColumnNames(String columnName) {
Preconditions.checkNotNull(getCascadesContext());
if (!getCascadesContext().getStatementContext().getColumnNames().add(columnName)) {
throw new AnalysisException("Collect column name failed, columnName : " + columnName);
}
}
}

View File

@ -168,11 +168,11 @@ class SubExprAnalyzer extends DefaultExpressionRewriter<CascadesContext> {
}
private AnalyzedResult analyzeSubquery(SubqueryExpr expr) {
CascadesContext subqueryContext = CascadesContext.newRewriteContext(cascadesContext, expr.getQueryPlan());
CascadesContext subqueryContext = CascadesContext.newContextWithCteContext(
cascadesContext, expr.getQueryPlan(), cascadesContext.getCteContext());
Scope subqueryScope = genScopeWithSubquery(expr);
subqueryContext.setOuterScope(subqueryScope);
subqueryContext.newAnalyzer().analyze();
cascadesContext.putAllCTEIdToConsumer(subqueryContext.getCteIdToConsumers());
return new AnalyzedResult((LogicalPlan) subqueryContext.getRewritePlan(),
subqueryScope.getCorrelatedSlots());
}

View File

@ -23,50 +23,38 @@ import org.apache.doris.nereids.exceptions.AnalysisException;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalEsScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalSchemaScan;
import org.apache.doris.nereids.trees.plans.algebra.CatalogRelation;
import org.apache.doris.qe.ConnectContext;
import com.google.common.collect.Sets;
import java.util.Set;
/**
* Check whether a user is permitted to scan specific tables.
*/
public class UserAuthentication extends OneAnalysisRuleFactory {
Set<Class<?>> relationsToCheck = Sets.newHashSet(LogicalOlapScan.class, LogicalEsScan.class,
LogicalFileScan.class, LogicalSchemaScan.class);
@Override
public Rule build() {
return logicalRelation()
.thenApply(ctx -> checkPermission(ctx.root, ctx.connectContext))
.when(CatalogRelation.class::isInstance)
.thenApply(ctx -> checkPermission((CatalogRelation) ctx.root, ctx.connectContext))
.toRule(RuleType.RELATION_AUTHENTICATION);
}
private Plan checkPermission(LogicalRelation relation, ConnectContext connectContext) {
private Plan checkPermission(CatalogRelation relation, ConnectContext connectContext) {
// do not check priv when replaying dump file
if (connectContext.getSessionVariable().isPlayNereidsDump()) {
return relation;
return null;
}
if (relationsToCheck.contains(relation.getClass())) {
String dbName =
!relation.getQualifier().isEmpty() ? relation.getQualifier().get(0) : null;
String tableName = relation.getTable().getName();
if (!connectContext.getEnv().getAccessManager().checkTblPriv(connectContext, dbName,
tableName, PrivPredicate.SELECT)) {
String message = ErrorCode.ERR_TABLEACCESS_DENIED_ERROR.formatErrorMsg("SELECT",
ConnectContext.get().getQualifiedUser(), ConnectContext.get().getRemoteIP(),
dbName + ": " + tableName);
throw new AnalysisException(message);
}
String dbName = relation.getDatabase().getFullName();
String tableName = relation.getTable().getName();
if (!connectContext.getEnv().getAccessManager().checkTblPriv(connectContext, dbName,
tableName, PrivPredicate.SELECT)) {
String message = ErrorCode.ERR_TABLEACCESS_DENIED_ERROR.formatErrorMsg("SELECT",
ConnectContext.get().getQualifiedUser(), ConnectContext.get().getRemoteIP(),
dbName + ": " + tableName);
throw new AnalysisException(message);
}
return relation;
return null;
}
}

View File

@ -111,7 +111,7 @@ public class ExpressionRewrite implements RewriteRuleFactory {
if (projects.equals(newProjects)) {
return oneRowRelation;
}
return new LogicalOneRowRelation(newProjects);
return new LogicalOneRowRelation(oneRowRelation.getRelationId(), newProjects);
}).toRule(RuleType.REWRITE_ONE_ROW_RELATION_EXPRESSION);
}
}

View File

@ -24,15 +24,16 @@ import org.apache.doris.nereids.trees.plans.physical.PhysicalCTEConsumer;
/**
* Implementation rule that converts a logical CTE consumer to a physical CTE consumer.
*/
public class LogicalCTEConsumeToPhysicalCTEConsume extends OneImplementationRuleFactory {
public class LogicalCTEConsumerToPhysicalCTEConsumer extends OneImplementationRuleFactory {
@Override
public Rule build() {
return logicalCTEConsumer().then(cte -> new PhysicalCTEConsumer(
cte.getCteId(),
cte.getConsumerToProducerOutputMap(),
cte.getProducerToConsumerOutputMap(),
cte.getLogicalProperties()
)
).toRule(RuleType.LOGICAL_CTE_CONSUME_TO_PHYSICAL_CTE_CONSUMER_RULE);
cte.getRelationId(),
cte.getCteId(),
cte.getConsumerToProducerOutputMap(),
cte.getProducerToConsumerOutputMap(),
cte.getLogicalProperties()
)
).toRule(RuleType.LOGICAL_CTE_CONSUMER_TO_PHYSICAL_CTE_CONSUMER_RULE);
}
}

View File

@ -24,14 +24,13 @@ import org.apache.doris.nereids.trees.plans.physical.PhysicalCTEProducer;
/**
* Implementation rule that converts a logical CTE producer to a physical CTE producer.
*/
public class LogicalCTEProduceToPhysicalCTEProduce extends OneImplementationRuleFactory {
public class LogicalCTEProducerToPhysicalCTEProducer extends OneImplementationRuleFactory {
@Override
public Rule build() {
return logicalCTEProducer().then(cte -> new PhysicalCTEProducer(
return logicalCTEProducer().then(cte -> new PhysicalCTEProducer<>(
cte.getCteId(),
cte.getProjects(),
cte.getLogicalProperties(),
cte.child())
).toRule(RuleType.LOGICAL_CTE_PRODUCE_TO_PHYSICAL_CTE_PRODUCER_RULE);
).toRule(RuleType.LOGICAL_CTE_PRODUCER_TO_PHYSICAL_CTE_PRODUCER_RULE);
}
}

View File

@ -28,7 +28,8 @@ public class LogicalEmptyRelationToPhysicalEmptyRelation extends OneImplementati
@Override
public Rule build() {
return logicalEmptyRelation()
.then(relation -> new PhysicalEmptyRelation(relation.getProjects(), relation.getLogicalProperties()))
.then(relation -> new PhysicalEmptyRelation(relation.getRelationId(),
relation.getProjects(), relation.getLogicalProperties()))
.toRule(RuleType.LOGICAL_EMPTY_RELATION_TO_PHYSICAL_EMPTY_RELATION_RULE);
}
}

View File

@ -32,7 +32,7 @@ public class LogicalEsScanToPhysicalEsScan extends OneImplementationRuleFactory
public Rule build() {
return logicalEsScan().then(esScan ->
new PhysicalEsScan(
esScan.getId(),
esScan.getRelationId(),
esScan.getTable(),
esScan.getQualifier(),
DistributionSpecAny.INSTANCE,

View File

@ -32,7 +32,7 @@ public class LogicalFileScanToPhysicalFileScan extends OneImplementationRuleFact
public Rule build() {
return logicalFileScan().then(fileScan ->
new PhysicalFileScan(
fileScan.getId(),
fileScan.getRelationId(),
fileScan.getTable(),
fileScan.getQualifier(),
DistributionSpecAny.INSTANCE,

View File

@ -17,7 +17,6 @@
package org.apache.doris.nereids.rules.implementation;
import org.apache.doris.nereids.properties.DistributionSpecAny;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.physical.PhysicalJdbcScan;
@ -32,10 +31,9 @@ public class LogicalJdbcScanToPhysicalJdbcScan extends OneImplementationRuleFact
public Rule build() {
return logicalJdbcScan().then(jdbcScan ->
new PhysicalJdbcScan(
jdbcScan.getId(),
jdbcScan.getRelationId(),
jdbcScan.getTable(),
jdbcScan.getQualifier(),
DistributionSpecAny.INSTANCE,
Optional.empty(),
jdbcScan.getLogicalProperties())
).toRule(RuleType.LOGICAL_JDBC_SCAN_TO_PHYSICAL_JDBC_SCAN_RULE);

View File

@ -50,7 +50,7 @@ public class LogicalOlapScanToPhysicalOlapScan extends OneImplementationRuleFact
public Rule build() {
return logicalOlapScan().then(olapScan ->
new PhysicalOlapScan(
olapScan.getId(),
olapScan.getRelationId(),
olapScan.getTable(),
olapScan.getQualifier(),
olapScan.getSelectedIndexId(),

View File

@ -28,7 +28,7 @@ public class LogicalOneRowRelationToPhysicalOneRowRelation extends OneImplementa
@Override
public Rule build() {
return logicalOneRowRelation()
.then(relation -> new PhysicalOneRowRelation(
.then(relation -> new PhysicalOneRowRelation(relation.getRelationId(),
relation.getProjects(), relation.getLogicalProperties()))
.toRule(RuleType.LOGICAL_ONE_ROW_RELATION_TO_PHYSICAL_ONE_ROW_RELATION);
}

View File

@ -30,7 +30,7 @@ public class LogicalSchemaScanToPhysicalSchemaScan extends OneImplementationRule
@Override
public Rule build() {
return logicalSchemaScan().then(scan ->
new PhysicalSchemaScan(scan.getId(),
new PhysicalSchemaScan(scan.getRelationId(),
scan.getTable(),
scan.getQualifier(),
Optional.empty(),

View File

@ -28,7 +28,7 @@ public class LogicalTVFRelationToPhysicalTVFRelation extends OneImplementationRu
@Override
public Rule build() {
return logicalTVFRelation()
.then(relation -> new PhysicalTVFRelation(relation.getId(),
.then(relation -> new PhysicalTVFRelation(relation.getRelationId(),
relation.getFunction(), relation.getLogicalProperties()))
.toRule(RuleType.LOGICAL_TVF_RELATION_TO_PHYSICAL_TVF_RELATION);
}

View File

@ -15,16 +15,16 @@
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.analysis;
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.StatementContext;
import org.apache.doris.nereids.analyzer.UnboundOlapTableSink;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.trees.plans.LimitPhase;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalLimit;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapTableSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalSort;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
@ -53,27 +53,35 @@ public class AddDefaultLimit extends DefaultPlanRewriter<StatementContext> imple
return plan;
}
// the default limit should be added under the anchor to keep optimization opportunities
@Override
public LogicalPlan visitLogicalLimit(LogicalLimit<? extends Plan> limit, StatementContext context) {
return limit;
}
@Override
public LogicalPlan visitLogicalCTE(LogicalCTE<? extends Plan> cte, StatementContext context) {
Plan child = cte.child().accept(this, context);
return ((LogicalPlan) cte.withChildren(child));
public Plan visitLogicalCTEAnchor(LogicalCTEAnchor<? extends Plan, ? extends Plan> cteAnchor,
StatementContext context) {
return cteAnchor.withChildren(cteAnchor.child(0), cteAnchor.child(1));
}
// the sink node should stay at the top of the plan tree.
// currently, it is either an olap table sink or a file sink.
@Override
public LogicalPlan visitUnboundOlapTableSink(UnboundOlapTableSink<? extends Plan> sink, StatementContext context) {
Plan child = sink.child().accept(this, context);
return ((LogicalPlan) sink.withChildren(child));
public Plan visitLogicalOlapTableSink(LogicalOlapTableSink<? extends Plan> olapTableSink,
StatementContext context) {
Plan child = olapTableSink.child().accept(this, context);
return olapTableSink.withChildren(child);
}
@Override
public LogicalPlan visitLogicalSort(LogicalSort<? extends Plan> sort, StatementContext context) {
public Plan visitLogicalFileSink(LogicalFileSink<? extends Plan> fileSink, StatementContext context) {
Plan child = fileSink.child().accept(this, context);
return fileSink.withChildren(child);
}
@Override
public Plan visitLogicalLimit(LogicalLimit<? extends Plan> limit, StatementContext context) {
return limit;
}
@Override
public Plan visitLogicalSort(LogicalSort<? extends Plan> sort, StatementContext context) {
ConnectContext ctx = context.getConnectContext();
if (ctx != null) {
long defaultLimit = ctx.getSessionVariable().defaultOrderByLimit;

View File

@ -23,6 +23,7 @@ import org.apache.doris.nereids.trees.expressions.Alias;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.OrderExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.expressions.functions.ExpressionTrait;
@ -30,15 +31,18 @@ import org.apache.doris.nereids.trees.expressions.functions.Function;
import org.apache.doris.nereids.trees.expressions.visitor.DefaultExpressionRewriter;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalAggregate;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalGenerate;
import org.apache.doris.nereids.trees.plans.logical.LogicalJoin;
import org.apache.doris.nereids.trees.plans.logical.LogicalPartitionTopN;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.nereids.trees.plans.logical.LogicalRepeat;
import org.apache.doris.nereids.trees.plans.logical.LogicalSetOperation;
import org.apache.doris.nereids.trees.plans.logical.LogicalSort;
import org.apache.doris.nereids.trees.plans.logical.LogicalTopN;
import org.apache.doris.nereids.trees.plans.logical.LogicalUnion;
import org.apache.doris.nereids.trees.plans.logical.LogicalWindow;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
@ -47,7 +51,9 @@ import org.apache.doris.nereids.util.ExpressionUtils;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
@ -57,70 +63,68 @@ import java.util.stream.Collectors;
* because some rules could change an output's nullability.
* So, we need a rule to adjust every expression's nullable attribute after rewrite.
*/
public class AdjustNullable extends DefaultPlanRewriter<Void> implements CustomRewriter {
public class AdjustNullable extends DefaultPlanRewriter<Map<ExprId, Slot>> implements CustomRewriter {
@Override
public Plan rewriteRoot(Plan plan, JobContext jobContext) {
return plan.accept(this, null);
return plan.accept(this, Maps.newHashMap());
}
@Override
public Plan visit(Plan plan, Void context) {
LogicalPlan logicalPlan = (LogicalPlan) super.visit(plan, context);
return logicalPlan.recomputeLogicalProperties();
public Plan visit(Plan plan, Map<ExprId, Slot> replaceMap) {
LogicalPlan logicalPlan = (LogicalPlan) super.visit(plan, replaceMap);
logicalPlan = logicalPlan.recomputeLogicalProperties();
logicalPlan.getOutputSet().forEach(s -> replaceMap.put(s.getExprId(), s));
return logicalPlan;
}
@Override
public Plan visitLogicalAggregate(LogicalAggregate<? extends Plan> aggregate, Void context) {
aggregate = (LogicalAggregate<? extends Plan>) super.visit(aggregate, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(aggregate);
public Plan visitLogicalAggregate(LogicalAggregate<? extends Plan> aggregate, Map<ExprId, Slot> replaceMap) {
aggregate = (LogicalAggregate<? extends Plan>) super.visit(aggregate, replaceMap);
List<NamedExpression> newOutputs
= updateExpressions(aggregate.getOutputExpressions(), exprIdSlotMap);
= updateExpressions(aggregate.getOutputExpressions(), replaceMap);
List<Expression> newGroupExpressions
= updateExpressions(aggregate.getGroupByExpressions(), exprIdSlotMap);
= updateExpressions(aggregate.getGroupByExpressions(), replaceMap);
newOutputs.forEach(o -> replaceMap.put(o.getExprId(), o.toSlot()));
return aggregate.withGroupByAndOutput(newGroupExpressions, newOutputs);
}
@Override
public Plan visitLogicalFilter(LogicalFilter<? extends Plan> filter, Void context) {
filter = (LogicalFilter<? extends Plan>) super.visit(filter, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(filter);
Set<Expression> conjuncts = updateExpressions(filter.getConjuncts(), exprIdSlotMap);
public Plan visitLogicalFilter(LogicalFilter<? extends Plan> filter, Map<ExprId, Slot> replaceMap) {
filter = (LogicalFilter<? extends Plan>) super.visit(filter, replaceMap);
Set<Expression> conjuncts = updateExpressions(filter.getConjuncts(), replaceMap);
return filter.withConjuncts(conjuncts).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalGenerate(LogicalGenerate<? extends Plan> generate, Void context) {
generate = (LogicalGenerate<? extends Plan>) super.visit(generate, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(generate);
List<Function> newGenerators = updateExpressions(generate.getGenerators(), exprIdSlotMap);
return generate.withGenerators(newGenerators).recomputeLogicalProperties();
public Plan visitLogicalGenerate(LogicalGenerate<? extends Plan> generate, Map<ExprId, Slot> replaceMap) {
generate = (LogicalGenerate<? extends Plan>) super.visit(generate, replaceMap);
List<Function> newGenerators = updateExpressions(generate.getGenerators(), replaceMap);
Plan newGenerate = generate.withGenerators(newGenerators).recomputeLogicalProperties();
newGenerate.getOutputSet().forEach(o -> replaceMap.put(o.getExprId(), o));
return newGenerate;
}
@Override
public Plan visitLogicalJoin(LogicalJoin<? extends Plan, ? extends Plan> join, Void context) {
join = (LogicalJoin<? extends Plan, ? extends Plan>) super.visit(join, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(join);
List<Expression> hashConjuncts = updateExpressions(join.getHashJoinConjuncts(), exprIdSlotMap);
// because other join compute on join's output on be, so we need to change slot to join's output
exprIdSlotMap = join.getOutputSet().stream()
.collect(Collectors.toMap(NamedExpression::getExprId, s -> s));
List<Expression> otherConjuncts = updateExpressions(join.getOtherJoinConjuncts(), exprIdSlotMap);
public Plan visitLogicalJoin(LogicalJoin<? extends Plan, ? extends Plan> join, Map<ExprId, Slot> replaceMap) {
join = (LogicalJoin<? extends Plan, ? extends Plan>) super.visit(join, replaceMap);
List<Expression> hashConjuncts = updateExpressions(join.getHashJoinConjuncts(), replaceMap);
join.getOutputSet().forEach(o -> replaceMap.put(o.getExprId(), o));
List<Expression> otherConjuncts = updateExpressions(join.getOtherJoinConjuncts(), replaceMap);
return join.withJoinConjuncts(hashConjuncts, otherConjuncts).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalProject(LogicalProject<? extends Plan> project, Void context) {
project = (LogicalProject<? extends Plan>) super.visit(project, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(project);
List<NamedExpression> newProjects = updateExpressions(project.getProjects(), exprIdSlotMap);
public Plan visitLogicalProject(LogicalProject<? extends Plan> project, Map<ExprId, Slot> replaceMap) {
project = (LogicalProject<? extends Plan>) super.visit(project, replaceMap);
List<NamedExpression> newProjects = updateExpressions(project.getProjects(), replaceMap);
newProjects.forEach(p -> replaceMap.put(p.getExprId(), p.toSlot()));
return project.withProjects(newProjects);
}
@Override
public Plan visitLogicalRepeat(LogicalRepeat<? extends Plan> repeat, Void context) {
repeat = (LogicalRepeat<? extends Plan>) super.visit(repeat, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(repeat);
public Plan visitLogicalRepeat(LogicalRepeat<? extends Plan> repeat, Map<ExprId, Slot> replaceMap) {
repeat = (LogicalRepeat<? extends Plan>) super.visit(repeat, replaceMap);
Set<Expression> flattenGroupingSetExpr = ImmutableSet.copyOf(
ExpressionUtils.flatExpressions(repeat.getGroupingSets()));
List<NamedExpression> newOutputs = Lists.newArrayList();
@ -128,15 +132,16 @@ public class AdjustNullable extends DefaultPlanRewriter<Void> implements CustomR
if (flattenGroupingSetExpr.contains(output)) {
newOutputs.add(output);
} else {
newOutputs.add(updateExpression(output, exprIdSlotMap));
newOutputs.add(updateExpression(output, replaceMap));
}
}
newOutputs.forEach(o -> replaceMap.put(o.getExprId(), o.toSlot()));
return repeat.withGroupSetsAndOutput(repeat.getGroupingSets(), newOutputs).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalSetOperation(LogicalSetOperation setOperation, Void context) {
setOperation = (LogicalSetOperation) super.visit(setOperation, context);
public Plan visitLogicalSetOperation(LogicalSetOperation setOperation, Map<ExprId, Slot> replaceMap) {
setOperation = (LogicalSetOperation) super.visit(setOperation, replaceMap);
if (setOperation.children().isEmpty()) {
return setOperation;
}
@ -150,6 +155,16 @@ public class AdjustNullable extends DefaultPlanRewriter<Void> implements CustomR
}
}
}
if (setOperation instanceof LogicalUnion) {
LogicalUnion logicalUnion = (LogicalUnion) setOperation;
for (List<NamedExpression> constantExprs : logicalUnion.getConstantExprsList()) {
for (int j = 0; j < constantExprs.size(); j++) {
if (constantExprs.get(j).nullable()) {
inputNullable.set(j, true);
}
}
}
}
List<NamedExpression> outputs = setOperation.getOutputs();
List<NamedExpression> newOutputs = Lists.newArrayListWithCapacity(outputs.size());
for (int i = 0; i < inputNullable.size(); i++) {
@ -160,48 +175,71 @@ public class AdjustNullable extends DefaultPlanRewriter<Void> implements CustomR
}
newOutputs.add(ne instanceof Alias ? (NamedExpression) ne.withChildren(slot) : slot);
}
newOutputs.forEach(o -> replaceMap.put(o.getExprId(), o.toSlot()));
return setOperation.withNewOutputs(newOutputs).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalSort(LogicalSort<? extends Plan> sort, Void context) {
sort = (LogicalSort<? extends Plan>) super.visit(sort, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(sort);
public Plan visitLogicalSort(LogicalSort<? extends Plan> sort, Map<ExprId, Slot> replaceMap) {
sort = (LogicalSort<? extends Plan>) super.visit(sort, replaceMap);
List<OrderKey> newKeys = sort.getOrderKeys().stream()
.map(old -> old.withExpression(updateExpression(old.getExpr(), exprIdSlotMap)))
.map(old -> old.withExpression(updateExpression(old.getExpr(), replaceMap)))
.collect(ImmutableList.toImmutableList());
return sort.withOrderKeys(newKeys).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalTopN(LogicalTopN<? extends Plan> topN, Void context) {
topN = (LogicalTopN<? extends Plan>) super.visit(topN, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(topN);
public Plan visitLogicalTopN(LogicalTopN<? extends Plan> topN, Map<ExprId, Slot> replaceMap) {
topN = (LogicalTopN<? extends Plan>) super.visit(topN, replaceMap);
List<OrderKey> newKeys = topN.getOrderKeys().stream()
.map(old -> old.withExpression(updateExpression(old.getExpr(), exprIdSlotMap)))
.map(old -> old.withExpression(updateExpression(old.getExpr(), replaceMap)))
.collect(ImmutableList.toImmutableList());
return topN.withOrderKeys(newKeys).recomputeLogicalProperties();
}
@Override
public Plan visitLogicalWindow(LogicalWindow<? extends Plan> window, Void context) {
window = (LogicalWindow<? extends Plan>) super.visit(window, context);
Map<ExprId, Slot> exprIdSlotMap = collectChildrenOutputMap(window);
public Plan visitLogicalWindow(LogicalWindow<? extends Plan> window, Map<ExprId, Slot> replaceMap) {
window = (LogicalWindow<? extends Plan>) super.visit(window, replaceMap);
List<NamedExpression> windowExpressions =
updateExpressions(window.getWindowExpressions(), exprIdSlotMap);
updateExpressions(window.getWindowExpressions(), replaceMap);
windowExpressions.forEach(w -> replaceMap.put(w.getExprId(), w.toSlot()));
return window.withExpression(windowExpressions, window.child());
}
private <T extends Expression> T updateExpression(T input, Map<ExprId, Slot> exprIdSlotMap) {
return (T) input.rewriteDownShortCircuit(e -> e.accept(SlotReferenceReplacer.INSTANCE, exprIdSlotMap));
@Override
public Plan visitLogicalPartitionTopN(LogicalPartitionTopN<? extends Plan> partitionTopN,
Map<ExprId, Slot> replaceMap) {
partitionTopN = (LogicalPartitionTopN<? extends Plan>) super.visit(partitionTopN, replaceMap);
List<Expression> partitionKeys = updateExpressions(partitionTopN.getPartitionKeys(), replaceMap);
List<OrderExpression> orderKeys = updateExpressions(partitionTopN.getOrderKeys(), replaceMap);
return partitionTopN.withPartitionKeysAndOrderKeys(partitionKeys, orderKeys);
}
private <T extends Expression> List<T> updateExpressions(List<T> inputs, Map<ExprId, Slot> exprIdSlotMap) {
return inputs.stream().map(i -> updateExpression(i, exprIdSlotMap)).collect(ImmutableList.toImmutableList());
@Override
public Plan visitLogicalCTEConsumer(LogicalCTEConsumer cteConsumer, Map<ExprId, Slot> replaceMap) {
Map<Slot, Slot> consumerToProducerOutputMap = new LinkedHashMap<>();
Map<Slot, Slot> producerToConsumerOutputMap = new LinkedHashMap<>();
for (Slot producerOutputSlot : cteConsumer.getConsumerToProducerOutputMap().values()) {
Slot newProducerOutputSlot = updateExpression(producerOutputSlot, replaceMap);
Slot newConsumerOutputSlot = cteConsumer.getProducerToConsumerOutputMap().get(producerOutputSlot)
.withNullable(newProducerOutputSlot.nullable());
producerToConsumerOutputMap.put(newProducerOutputSlot, newConsumerOutputSlot);
consumerToProducerOutputMap.put(newConsumerOutputSlot, newProducerOutputSlot);
replaceMap.put(newConsumerOutputSlot.getExprId(), newConsumerOutputSlot);
}
return cteConsumer.withTwoMaps(consumerToProducerOutputMap, producerToConsumerOutputMap);
}
private <T extends Expression> Set<T> updateExpressions(Set<T> inputs, Map<ExprId, Slot> exprIdSlotMap) {
return inputs.stream().map(i -> updateExpression(i, exprIdSlotMap)).collect(ImmutableSet.toImmutableSet());
private <T extends Expression> T updateExpression(T input, Map<ExprId, Slot> replaceMap) {
return (T) input.rewriteDownShortCircuit(e -> e.accept(SlotReferenceReplacer.INSTANCE, replaceMap));
}
private <T extends Expression> List<T> updateExpressions(List<T> inputs, Map<ExprId, Slot> replaceMap) {
return inputs.stream().map(i -> updateExpression(i, replaceMap)).collect(ImmutableList.toImmutableList());
}
private <T extends Expression> Set<T> updateExpressions(Set<T> inputs, Map<ExprId, Slot> replaceMap) {
return inputs.stream().map(i -> updateExpression(i, replaceMap)).collect(ImmutableSet.toImmutableSet());
}
private Map<ExprId, Slot> collectChildrenOutputMap(LogicalPlan plan) {
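To make the replaceMap change above easier to follow, here is a minimal, self-contained sketch of the idiom: each visited node publishes its (possibly nullability-adjusted) output slots into a map keyed by ExprId, and parents rewrite their expressions against that map instead of recollecting children's outputs. `ReplaceMapSketch`, `Slot`, and `update` are simplified stand-ins for the Doris types, used only for illustration.
```java
// Simplified, self-contained sketch (not Doris code): children publish their re-nullable output
// slots keyed by exprId, and parents rewrite their expressions against that map before
// recomputing logical properties.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplaceMapSketch {
    // stand-in for SlotReference: identity is the exprId, nullability may change after rewrite
    record Slot(int exprId, String name, boolean nullable) {}

    static Slot update(Slot s, Map<Integer, Slot> replaceMap) {
        // keep the original slot when the child did not publish a replacement
        return replaceMap.getOrDefault(s.exprId(), s);
    }

    public static void main(String[] args) {
        Map<Integer, Slot> replaceMap = new HashMap<>();
        // child (e.g. the right side of a left outer join) republishes slot #1 as nullable
        replaceMap.put(1, new Slot(1, "b", true));

        // parent project originally referenced a non-nullable slot #1
        List<Slot> projects = List.of(new Slot(1, "b", false), new Slot(2, "a", false));
        List<Slot> adjusted = projects.stream().map(s -> update(s, replaceMap)).toList();

        System.out.println(adjusted); // slot #1 is now nullable, slot #2 unchanged
    }
}
```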

View File

@ -32,6 +32,7 @@ import org.apache.doris.nereids.trees.expressions.functions.window.SupportWindow
import org.apache.doris.nereids.trees.expressions.literal.Literal;
import org.apache.doris.nereids.trees.expressions.visitor.DefaultExpressionVisitor;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.algebra.CatalogRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalAggregate;
import org.apache.doris.nereids.trees.plans.logical.LogicalApply;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
@ -248,11 +249,11 @@ public class AggScalarSubQueryToWindowFunction extends DefaultPlanRewriter<JobCo
* 3. the remaining table in step 2 should be correlated table for inner plan
*/
private boolean checkRelation(List<Expression> correlatedSlots) {
List<LogicalRelation> outerTables = outerPlans.stream().filter(LogicalRelation.class::isInstance)
.map(LogicalRelation.class::cast)
List<CatalogRelation> outerTables = outerPlans.stream().filter(CatalogRelation.class::isInstance)
.map(CatalogRelation.class::cast)
.collect(Collectors.toList());
List<LogicalRelation> innerTables = innerPlans.stream().filter(LogicalRelation.class::isInstance)
.map(LogicalRelation.class::cast)
List<CatalogRelation> innerTables = innerPlans.stream().filter(CatalogRelation.class::isInstance)
.map(CatalogRelation.class::cast)
.collect(Collectors.toList());
List<Long> outerIds = outerTables.stream().map(node -> node.getTable().getId()).collect(Collectors.toList());
@ -273,15 +274,16 @@ public class AggScalarSubQueryToWindowFunction extends DefaultPlanRewriter<JobCo
Set<ExprId> correlatedRelationOutput = outerTables.stream()
.filter(node -> outerIds.contains(node.getTable().getId()))
.map(LogicalRelation.class::cast)
.map(LogicalRelation::getOutputExprIdSet).flatMap(Collection::stream).collect(Collectors.toSet());
return ExpressionUtils.collect(correlatedSlots, NamedExpression.class::isInstance).stream()
.map(NamedExpression.class::cast)
.allMatch(e -> correlatedRelationOutput.contains(e.getExprId()));
}
private void createSlotMapping(List<LogicalRelation> outerTables, List<LogicalRelation> innerTables) {
for (LogicalRelation outerTable : outerTables) {
for (LogicalRelation innerTable : innerTables) {
private void createSlotMapping(List<CatalogRelation> outerTables, List<CatalogRelation> innerTables) {
for (CatalogRelation outerTable : outerTables) {
for (CatalogRelation innerTable : innerTables) {
if (innerTable.getTable().getId() == outerTable.getTable().getId()) {
for (Slot innerSlot : innerTable.getOutput()) {
for (Slot outerSlot : outerTable.getOutput()) {
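For illustration of `createSlotMapping` over `CatalogRelation`s above, a hedged plain-Java sketch follows. `SlotMappingSketch` and its `Table` record are hypothetical stand-ins; matching tables by table id comes from the hunk, while pairing slots by column name is an assumption made only for this sketch.
```java
// Hedged sketch (plain Java, hypothetical names): pairs slots of the outer and inner occurrences
// of the same catalog table; tables are matched by id, slots are assumed to be paired by name.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SlotMappingSketch {
    record Table(long id, List<String> output) {}

    static Map<String, String> createSlotMapping(List<Table> outerTables, List<Table> innerTables) {
        Map<String, String> innerToOuter = new HashMap<>();
        for (Table outer : outerTables) {
            for (Table inner : innerTables) {
                if (inner.id() == outer.id()) {
                    // pair slots of the same underlying table (assumed: matched by column name)
                    for (String innerSlot : inner.output()) {
                        for (String outerSlot : outer.output()) {
                            if (innerSlot.equals(outerSlot)) {
                                innerToOuter.put("inner." + innerSlot, "outer." + outerSlot);
                            }
                        }
                    }
                }
            }
        }
        return innerToOuter;
    }

    public static void main(String[] args) {
        Table t = new Table(1001L, List.of("k1", "v1"));
        System.out.println(createSlotMapping(List.of(t), List.of(t)));
    }
}
```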

View File

@ -1,67 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalEmptyRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalSubQueryAlias;
import org.apache.doris.qe.ConnectContext;
/**
* BuildCTEAnchorAndCTEProducer.
*/
public class BuildCTEAnchorAndCTEProducer extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalCTE().thenApply(ctx -> {
return rewrite(ctx.root, ctx.cascadesContext);
}).toRule(RuleType.BUILD_CTE_ANCHOR_AND_CTE_PRODUCER);
}
@SuppressWarnings({"unchecked", "rawtypes"})
private LogicalPlan rewrite(LogicalPlan p, CascadesContext cascadesContext) {
if (!(p instanceof LogicalCTE)) {
return p;
}
LogicalCTE logicalCTE = (LogicalCTE) p;
LogicalPlan child = (LogicalPlan) logicalCTE.child();
if (!(child instanceof LogicalEmptyRelation)) {
for (int i = logicalCTE.getAliasQueries().size() - 1; i >= 0; i--) {
LogicalSubQueryAlias s = (LogicalSubQueryAlias) logicalCTE.getAliasQueries().get(i);
CTEId id = logicalCTE.findCTEId(s.getAlias());
if (cascadesContext.cteReferencedCount(id)
<= ConnectContext.get().getSessionVariable().inlineCTEReferencedThreshold
|| !ConnectContext.get().getSessionVariable().getEnablePipelineEngine()) {
continue;
}
LogicalCTEProducer logicalCTEProducer = new LogicalCTEProducer(
rewrite((LogicalPlan) s.child(), cascadesContext), id);
child = new LogicalCTEAnchor(logicalCTEProducer, child, id);
}
}
return child;
}
}

View File

@ -0,0 +1,112 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.trees.copier.DeepCopierContext;
import org.apache.doris.nereids.trees.copier.LogicalPlanDeepCopier;
import org.apache.doris.nereids.trees.expressions.Alias;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
import org.apache.doris.qe.ConnectContext;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;
import java.util.List;
/**
 * Inline CTE consumers by replacing each LogicalCTEConsumer with a deep copy of its producer's plan.
 * A CTE is kept materialized (its CTEAnchor and CTEProducer are reserved) only when the pipeline engine
 * and CTE materialization are enabled and the number of consumers exceeds the inline threshold;
 * otherwise every consumer of that CTE is inlined and the anchor is removed.
*/
public class CTEInline extends DefaultPlanRewriter<LogicalCTEProducer<?>> implements CustomRewriter {
@Override
public Plan rewriteRoot(Plan plan, JobContext jobContext) {
Plan root = plan.accept(this, null);
// collect the mapping from cte id to consumers
root.foreach(p -> {
if (p instanceof LogicalCTEConsumer) {
jobContext.getCascadesContext().putCTEIdToConsumer(((LogicalCTEConsumer) p));
}
});
return root;
}
@Override
public Plan visitLogicalCTEAnchor(LogicalCTEAnchor<? extends Plan, ? extends Plan> cteAnchor,
LogicalCTEProducer<?> producer) {
if (producer != null) {
// process upper anchor
List<Plan> children = cteAnchor.children().stream()
.map(c -> c.accept(this, producer))
.collect(ImmutableList.toImmutableList());
return cteAnchor.withChildren(children);
} else {
// process this anchor
List<LogicalCTEConsumer> consumers = cteAnchor.child(1).collectToList(p -> {
if (p instanceof LogicalCTEConsumer) {
return ((LogicalCTEConsumer) p).getCteId().equals(cteAnchor.getCteId());
}
return false;
});
if (ConnectContext.get().getSessionVariable().getEnablePipelineEngine()
&& ConnectContext.get().getSessionVariable().enableCTEMaterialize
&& consumers.size() > ConnectContext.get().getSessionVariable().inlineCTEReferencedThreshold) {
// do not inline; keep the anchor so the producer can be materialized
Plan right = cteAnchor.right().accept(this, null);
return cteAnchor.withChildren(cteAnchor.left(), right);
} else {
// should inline
Plan root = cteAnchor.right().accept(this, (LogicalCTEProducer<?>) cteAnchor.left());
// process child
return root.accept(this, null);
}
}
}
@Override
public Plan visitLogicalCTEConsumer(LogicalCTEConsumer cteConsumer, LogicalCTEProducer<?> producer) {
if (producer != null && cteConsumer.getCteId().equals(producer.getCteId())) {
DeepCopierContext deepCopierContext = new DeepCopierContext();
Plan inlinedPlan = LogicalPlanDeepCopier.INSTANCE
.deepCopy((LogicalPlan) producer.child(), deepCopierContext);
List<NamedExpression> projects = Lists.newArrayList();
for (Slot consumerSlot : cteConsumer.getOutput()) {
Slot producerSlot = cteConsumer.getProducerSlot(consumerSlot);
ExprId inlineExprId = deepCopierContext.exprIdReplaceMap.get(producerSlot.getExprId());
Alias alias = new Alias(consumerSlot.getExprId(), producerSlot.withExprId(inlineExprId),
consumerSlot.getName());
projects.add(alias);
}
return new LogicalProject<>(projects, inlinedPlan);
}
return cteConsumer;
}
}
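A minimal sketch of the inlining step in `visitLogicalCTEConsumer` above: the producer plan is deep-copied with fresh ExprIds, so the consumer is replaced by a project whose aliases keep the consumer's ExprIds but reference the copied producer slots. `InlineConsumerSketch`, `Slot`, and `Alias` below are simplified stand-ins, not the Doris classes.
```java
// Minimal sketch (hypothetical names, not Doris code): map each consumer slot to its producer
// slot, remap the producer slot to the deep-copied exprId, and keep the consumer's exprId on top.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class InlineConsumerSketch {
    record Slot(int exprId, String name) {}
    record Alias(int exprId, Slot child, String name) {}

    static List<Alias> buildProjects(Map<Slot, Slot> consumerToProducer,
                                     Map<Integer, Integer> exprIdReplaceMap) {
        return consumerToProducer.entrySet().stream()
                .map(e -> {
                    Slot consumer = e.getKey();
                    Slot producer = e.getValue();
                    // the deep copy generated a fresh exprId for the producer slot
                    int copiedId = exprIdReplaceMap.get(producer.exprId());
                    return new Alias(consumer.exprId(), new Slot(copiedId, producer.name()), consumer.name());
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // consumer slot #10 maps to producer slot #1; the deep copy renamed #1 -> #101
        List<Alias> projects = buildProjects(
                Map.of(new Slot(10, "c1"), new Slot(1, "c1")),
                Map.of(1, 101));
        System.out.println(projects); // Alias[exprId=10, child=Slot[exprId=101, ...], name=c1]
    }
}
```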

View File

@ -1,122 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.jobs.executor.Rewriter;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.nereids.util.ExpressionUtils;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import org.apache.commons.collections.CollectionUtils;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Map.Entry;
import java.util.Set;
import java.util.stream.Collectors;
/**
* Rewrite CTE Producer recursively.
*/
public class CTEProducerRewrite extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalCTEProducer().when(p -> !p.isRewritten()).thenApply(ctx -> {
LogicalCTEProducer<? extends Plan> cteProducer = ctx.root;
Set<Expression> projects = ctx.cascadesContext.getProjectForProducer(cteProducer.getCteId());
LogicalPlan child = tryToConstructFilter(ctx.cascadesContext, cteProducer.getCteId(),
(LogicalPlan) ctx.root.child());
if (CollectionUtils.isNotEmpty(projects)
&& ctx.cascadesContext.couldPruneColumnOnProducer(cteProducer.getCteId())) {
child = new LogicalProject(ImmutableList.copyOf(projects), child);
}
CascadesContext rewrittenCtx = ctx.cascadesContext.forkForCTEProducer(child);
Rewriter rewriter = new Rewriter(rewrittenCtx);
rewriter.execute();
return cteProducer.withChildrenAndProjects(ImmutableList.of(rewrittenCtx.getRewritePlan()),
new ArrayList<>(child.getOutput()), true);
}).toRule(RuleType.CTE_PRODUCER_REWRITE);
}
/*
* An expression can only be pushed down if it has filter expressions on all consumers that reference the slot.
* For example, let's assume a producer has two consumers, consumer1 and consumer2:
*
* filter(a > 5 and b < 1) -> consumer1
* filter(a < 8) -> consumer2
*
* In this case, the only expression that can be pushed down to the producer is filter(a > 5 or a < 8).
*/
private LogicalPlan tryToConstructFilter(CascadesContext cascadesContext, CTEId cteId, LogicalPlan child) {
Set<Integer> consumerIds = cascadesContext.getCteIdToConsumers().get(cteId).stream()
.map(LogicalCTEConsumer::getConsumerId)
.collect(Collectors.toSet());
Set<Set<Expression>> filtersAboveEachConsumer = cascadesContext.getConsumerIdToFilters().entrySet().stream()
.filter(kv -> consumerIds.contains(kv.getKey()))
.map(Entry::getValue)
.collect(Collectors.toSet());
Set<Expression> someone = filtersAboveEachConsumer.stream().findFirst().orElse(null);
if (someone == null) {
return child;
}
int filterSize = cascadesContext.getCteIdToConsumers().get(cteId).size();
Set<Expression> filter = new HashSet<>();
for (Expression f : someone) {
int matchCount = 1;
Set<SlotReference> slots = f.collect(e -> e instanceof SlotReference);
Set<Expression> mightBeJoined = new HashSet<>();
for (Set<Expression> another : filtersAboveEachConsumer) {
if (another.equals(someone)) {
continue;
}
Set<Expression> matched = new HashSet<>();
for (Expression e : another) {
Set<SlotReference> otherSlots = e.collect(ae -> ae instanceof SlotReference);
if (otherSlots.equals(slots)) {
matched.add(e);
}
}
if (!matched.isEmpty()) {
matchCount++;
}
mightBeJoined.addAll(matched);
}
if (matchCount >= filterSize) {
mightBeJoined.add(f);
filter.add(ExpressionUtils.or(mightBeJoined));
}
}
if (!filter.isEmpty()) {
return new LogicalFilter(ImmutableSet.of(ExpressionUtils.and(filter)), child);
}
return child;
}
}
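The filter-pushdown idea documented in `tryToConstructFilter` (and kept in `RewriteCteChildren` below) can be sketched in plain Java. This simplified version ORs the whole per-consumer conjunctions instead of matching predicates by slot set as the real rule does; `ProducerFilterSketch` is a hypothetical name used only for illustration.
```java
// Self-contained sketch of the filter-merge idea (simplified types, not Doris code): a predicate
// is pushed to the producer only when every consumer contributes a filter, and the pushed
// predicate is the OR of the per-consumer filters, so no consumer loses rows.
import java.util.List;
import java.util.stream.Collectors;

public class ProducerFilterSketch {
    // returns null when nothing can be pushed down
    static String tryToConstructFilter(List<List<String>> filtersAboveEachConsumer) {
        // every consumer must contribute a filter, otherwise pushing down could drop needed rows
        if (filtersAboveEachConsumer.stream().anyMatch(List::isEmpty)) {
            return null;
        }
        // OR together the per-consumer predicates (the real rule also matches them by slot set)
        return filtersAboveEachConsumer.stream()
                .map(filters -> String.join(" AND ", filters))
                .collect(Collectors.joining(") OR (", "(", ")"));
    }

    public static void main(String[] args) {
        System.out.println(tryToConstructFilter(List.of(
                List.of("a > 5", "b < 1"),      // filters above consumer1
                List.of("a < 8"))));            // filters above consumer2
        // -> (a > 5 AND b < 1) OR (a < 8)
        System.out.println(tryToConstructFilter(List.of(
                List.of("a > 5"),
                List.of())));                   // a consumer without filters -> null, push nothing
    }
}
```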

View File

@ -24,6 +24,7 @@ import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Match;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.expressions.literal.Literal;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import java.util.List;
@ -41,15 +42,15 @@ public class CheckMatchExpression extends OneRewriteRuleFactory {
.toRule(RuleType.CHECK_MATCH_EXPRESSION);
}
private LogicalFilter checkChildren(LogicalFilter filter) {
private Plan checkChildren(LogicalFilter<? extends Plan> filter) {
List<Expression> expressions = filter.getExpressions();
for (Expression expr : expressions) {
if (expr instanceof Match) {
Match matchExpression = (Match) expr;
if (!(matchExpression.left() instanceof SlotReference)
|| !(matchExpression.right() instanceof Literal)) {
throw new AnalysisException(String.format(
"Only support match left operand is SlotRef, right operand is Literal"));
throw new AnalysisException(String.format("Only support match left operand is SlotRef,"
+ " right operand is Literal. But meet expression %s", matchExpression));
}
}
}

View File

@ -40,11 +40,11 @@ public class CollectFilterAboveConsumer extends OneRewriteRuleFactory {
for (Expression expr : exprs) {
Expression rewrittenExpr = expr.rewriteUp(e -> {
if (e instanceof Slot) {
return cteConsumer.findProducerSlot((Slot) e);
return cteConsumer.getProducerSlot((Slot) e);
}
return e;
});
ctx.cascadesContext.putConsumerIdToFilter(cteConsumer.getConsumerId(), rewrittenExpr);
ctx.cascadesContext.putConsumerIdToFilter(cteConsumer.getRelationId(), rewrittenExpr);
}
return ctx.root;
}).toRule(RuleType.COLLECT_FILTER_ON_CONSUMER);

View File

@ -48,8 +48,8 @@ public class CollectProjectAboveConsumer implements RewriteRuleFactory {
collectProject(ctx.cascadesContext, namedExpressions, cteConsumer);
return ctx.root;
})),
RuleType.COLLECT_PROJECT_ABOVE_FILTER_CONSUMER.build(logicalProject(logicalFilter(logicalCTEConsumer()))
.thenApply(ctx -> {
RuleType.COLLECT_PROJECT_ABOVE_FILTER_CONSUMER
.build(logicalProject(logicalFilter(logicalCTEConsumer())).thenApply(ctx -> {
LogicalProject<LogicalFilter<LogicalCTEConsumer>> project = ctx.root;
LogicalFilter<LogicalCTEConsumer> filter = project.child();
Set<Slot> filterSlots = filter.getInputSlots();
@ -72,7 +72,7 @@ public class CollectProjectAboveConsumer implements RewriteRuleFactory {
if (!(node instanceof Slot)) {
return;
}
Slot slot = cteConsumer.findProducerSlot((Slot) node);
Slot slot = cteConsumer.getProducerSlot((Slot) node);
ctx.putCTEIdToProject(cteConsumer.getCteId(), slot);
ctx.markConsumerUnderProject(cteConsumer);
});

View File

@ -28,6 +28,7 @@ import org.apache.doris.nereids.trees.plans.algebra.SetOperation.Qualifier;
import org.apache.doris.nereids.trees.plans.logical.LogicalAggregate;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalExcept;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalIntersect;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapTableSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
@ -161,10 +162,15 @@ public class ColumnPruning extends DefaultPlanRewriter<PruneContext> implements
}
@Override
public Plan visitLogicalOlapTableSink(LogicalOlapTableSink olapTableSink, PruneContext context) {
public Plan visitLogicalOlapTableSink(LogicalOlapTableSink<? extends Plan> olapTableSink, PruneContext context) {
return skipPruneThisAndFirstLevelChildren(olapTableSink);
}
@Override
public Plan visitLogicalFileSink(LogicalFileSink<? extends Plan> fileSink, PruneContext context) {
return skipPruneThisAndFirstLevelChildren(fileSink);
}
// the backend does not support filter(project(agg)), so we cannot prune the key set in the agg,
// only prune the agg functions here
@Override

View File

@ -21,6 +21,7 @@ import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.literal.BooleanLiteral;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalEmptyRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
@ -36,11 +37,13 @@ public class EliminateFilter extends OneRewriteRuleFactory {
public Rule build() {
return logicalFilter()
.when(filter -> filter.getConjuncts().stream().anyMatch(BooleanLiteral.class::isInstance))
.then(filter -> {
.thenApply(ctx -> {
LogicalFilter<Plan> filter = ctx.root;
Set<Expression> newConjuncts = Sets.newHashSetWithExpectedSize(filter.getConjuncts().size());
for (Expression expression : filter.getConjuncts()) {
if (expression == BooleanLiteral.FALSE) {
return new LogicalEmptyRelation(filter.getOutput());
return new LogicalEmptyRelation(ctx.statementContext.getNextRelationId(),
filter.getOutput());
} else if (expression != BooleanLiteral.TRUE) {
newConjuncts.add(expression);
}
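A tiny sketch of what `EliminateFilter` does with boolean literals in the hunk above: a `FALSE` conjunct collapses the whole filter into an empty relation, `TRUE` conjuncts are dropped, and everything else is kept. `EliminateFilterSketch` and its string conjuncts are simplified stand-ins for the Doris expressions.
```java
// Tiny sketch (plain Java): simplify a conjunct set; Optional.empty() stands for
// "replace the filter with LogicalEmptyRelation".
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class EliminateFilterSketch {
    static Optional<Set<String>> simplify(List<String> conjuncts) {
        Set<String> kept = new LinkedHashSet<>();
        for (String c : conjuncts) {
            if (c.equals("FALSE")) {
                return Optional.empty();          // whole filter is always false
            } else if (!c.equals("TRUE")) {
                kept.add(c);                      // TRUE conjuncts are simply dropped
            }
        }
        return Optional.of(kept);
    }

    public static void main(String[] args) {
        System.out.println(simplify(List.of("a > 1", "TRUE")));   // Optional[[a > 1]]
        System.out.println(simplify(List.of("a > 1", "FALSE"))); // Optional.empty -> empty relation
    }
}
```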

View File

@ -29,7 +29,8 @@ public class EliminateLimit extends OneRewriteRuleFactory {
public Rule build() {
return logicalLimit()
.when(limit -> limit.getLimit() == 0)
.then(limit -> new LogicalEmptyRelation(limit.getOutput()))
.thenApply(ctx -> new LogicalEmptyRelation(ctx.statementContext.getNextRelationId(),
ctx.root.getOutput()))
.toRule(RuleType.ELIMINATE_LIMIT);
}
}

View File

@ -19,6 +19,7 @@ package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.annotation.DependsRules;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalEmptyRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
@ -56,7 +57,7 @@ public class EliminateUnnecessaryProject implements CustomRewriter {
private Plan rewriteProject(LogicalProject<Plan> project, boolean outputSavePoint) {
if (project.child() instanceof LogicalEmptyRelation) {
// eliminate unnecessary project
return new LogicalEmptyRelation(project.getProjects());
return new LogicalEmptyRelation(StatementScopeIdGenerator.newRelationId(), project.getProjects());
} else if (project.canEliminate() && outputSavePoint
&& project.getOutputSet().equals(project.child().getOutputSet())) {
// eliminate unnecessary project

View File

@ -34,6 +34,7 @@ import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalAggregate;
import org.apache.doris.nereids.trees.plans.logical.LogicalApply;
import org.apache.doris.nereids.trees.plans.logical.LogicalJoin;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.util.ExpressionUtils;
import com.google.common.collect.ImmutableList;
@ -93,7 +94,9 @@ public class InApplyToJoin extends OneRewriteRuleFactory {
//in-predicate to equal
Expression predicate;
Expression left = ((InSubquery) apply.getSubqueryExpr()).getCompareExpr();
Expression right = apply.getSubqueryExpr().getSubqueryOutput();
// TODO: tricky here, because when deep copying the logical plan, the apply's right child
// is not the same as the query plan in the subquery expr, since the scan node is copied twice
Expression right = apply.getSubqueryExpr().getSubqueryOutput((LogicalPlan) apply.right());
if (apply.isCorrelated()) {
predicate = ExpressionUtils.and(new EqualTo(left, right),
apply.getCorrelationFilter().get());

View File

@ -1,73 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.expressions.Alias;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.qe.ConnectContext;
import java.util.ArrayList;
import java.util.List;
/**
* A CTEConsumer would be converted to a inlined plan if corresponding CTE referenced less than or
* equal inline_cte_referenced_threshold (it's a session variable, by default is 1).
*/
public class InlineCTE extends OneRewriteRuleFactory {
private static final int INLINE_CTE_REFERENCED_THRESHOLD = 1;
@Override
public Rule build() {
return logicalCTEConsumer().thenApply(ctx -> {
LogicalCTEConsumer cteConsumer = ctx.root;
int refCount = ctx.cascadesContext.cteReferencedCount(cteConsumer.getCteId());
/*
* Current we only implement CTE Materialize on pipeline engine and only materialize those CTE whose
* refCount > NereidsRewriter.INLINE_CTE_REFERENCED_THRESHOLD.
*/
if (ConnectContext.get().getSessionVariable().getEnablePipelineEngine()
&& ConnectContext.get().getSessionVariable().enableCTEMaterialize
&& refCount > INLINE_CTE_REFERENCED_THRESHOLD) {
return cteConsumer;
}
LogicalPlan inlinedPlan = ctx.cascadesContext.findCTEPlanForInline(cteConsumer.getCteId());
List<Slot> inlinedPlanOutput = inlinedPlan.getOutput();
List<Slot> cteConsumerOutput = cteConsumer.getOutput();
List<NamedExpression> projects = new ArrayList<>();
for (Slot inlineSlot : inlinedPlanOutput) {
String name = inlineSlot.getName();
for (Slot consumerSlot : cteConsumerOutput) {
if (consumerSlot.getName().equals(name)) {
Alias alias = new Alias(consumerSlot.getExprId(), inlineSlot, name);
projects.add(alias);
break;
}
}
}
return new LogicalProject<>(projects,
inlinedPlan);
}).toRule(RuleType.INLINE_CTE);
}
}

View File

@ -0,0 +1,90 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapTableSink;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
import com.google.common.collect.Lists;
import java.util.List;
/**
 * pull up LogicalCteAnchor to the top of the plan to avoid CteAnchor breaking other rewrite rules' patterns.
 * The front producer may depend on the back producer in {@code List<LogicalCTEProducer<Plan>>}.
 * After this rule, all CteAnchors in the plan are normalized: every CteAnchor under a CteProducer is pulled
 * out and placed at the top of the plan according to the producers' dependency order.
*/
public class PullUpCteAnchor extends DefaultPlanRewriter<List<LogicalCTEProducer<Plan>>> implements CustomRewriter {
@Override
public Plan rewriteRoot(Plan plan, JobContext jobContext) {
List<LogicalCTEProducer<Plan>> producers = Lists.newArrayList();
return rewriteRoot(plan, producers);
}
private Plan rewriteRoot(Plan plan, List<LogicalCTEProducer<Plan>> producers) {
Plan root = plan.accept(this, producers);
for (LogicalCTEProducer<Plan> producer : producers) {
root = new LogicalCTEAnchor<>(producer.getCteId(), producer, root);
}
return root;
}
@Override
public Plan visitLogicalCTEAnchor(LogicalCTEAnchor<? extends Plan, ? extends Plan> cteAnchor,
List<LogicalCTEProducer<Plan>> producers) {
// 1. process child side
Plan root = cteAnchor.child(1).accept(this, producers);
// 2. process the producer side; need to collect all producers
cteAnchor.child(0).accept(this, producers);
return root;
}
@Override
public LogicalCTEProducer<Plan> visitLogicalCTEProducer(LogicalCTEProducer<? extends Plan> cteProducer,
List<LogicalCTEProducer<Plan>> producers) {
List<LogicalCTEProducer<Plan>> childProducers = Lists.newArrayList();
Plan child = cteProducer.child().accept(this, childProducers);
LogicalCTEProducer<Plan> newProducer = (LogicalCTEProducer<Plan>) cteProducer.withChildren(child);
// because the current producer relies on its child's producers, add the current producer first.
producers.add(newProducer);
producers.addAll(childProducers);
return newProducer;
}
// we should keep the sink node as the top node of the plan tree.
// currently, it is either an olap table sink or a file sink.
@Override
public Plan visitLogicalOlapTableSink(LogicalOlapTableSink<? extends Plan> olapTableSink,
List<LogicalCTEProducer<Plan>> producers) {
return olapTableSink.withChildren(rewriteRoot(olapTableSink.child(), producers));
}
@Override
public Plan visitLogicalFileSink(LogicalFileSink<? extends Plan> fileSink,
List<LogicalCTEProducer<Plan>> producers) {
return fileSink.withChildren(rewriteRoot(fileSink.child(), producers));
}
}
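To show why `PullUpCteAnchor` adds the current producer before its child's producers, here is a minimal plain-Java sketch of the wrapping loop in `rewriteRoot`: because anchors are wrapped in list order, producers that others depend on end up in the outer (upper) anchors. `AnchorOrderSketch` and the string plan nodes are illustrative stand-ins.
```java
// Small sketch (plain Java, hypothetical names): producers are collected so that a producer
// appears *before* the producers it depends on, and the wrapping loop then places the
// depended-on producers in the outer anchors.
import java.util.List;

public class AnchorOrderSketch {
    public static void main(String[] args) {
        // cte2 is defined in terms of cte1, so the collection order is [cte2, cte1]
        List<String> producers = List.of("producer(cte2)", "producer(cte1)");

        String root = "query";
        for (String producer : producers) {
            root = "anchor(" + producer + ", " + root + ")";
        }
        // cte1 ends up as the outermost anchor, available to cte2's producer below it
        System.out.println(root); // anchor(producer(cte1), anchor(producer(cte2), query))
    }
}
```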

View File

@ -83,9 +83,12 @@ public class PushdownFilterThroughAggregation extends OneRewriteRuleFactory {
*/
public static Set<Slot> getCanPushDownSlots(LogicalAggregate<? extends Plan> aggregate) {
Set<Slot> canPushDownSlots = new HashSet<>();
if (aggregate.hasRepeat()) {
if (aggregate.getSourceRepeat().isPresent()) {
// When there is a repeat, only the slots common to all grouping sets can be pushed down
canPushDownSlots.addAll(aggregate.getSourceRepeat().get().getCommonGroupingSetExpressions());
aggregate.getSourceRepeat().get().getCommonGroupingSetExpressions().stream()
.filter(Slot.class::isInstance)
.map(Slot.class::cast)
.forEach(canPushDownSlots::add);
} else {
for (Expression groupByExpression : aggregate.getGroupByExpressions()) {
if (groupByExpression instanceof Slot) {
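A simplified sketch of `getCanPushDownSlots` as changed above, assuming `getCommonGroupingSetExpressions` returns the expressions shared by every grouping set: with GROUPING SETS only the common slots are safe targets for pushed-down filters, otherwise any slot that appears directly in the GROUP BY is. `CanPushDownSlotsSketch` is a hypothetical stand-in for illustration.
```java
// Sketch (plain Java, simplified): compute which slots may receive pushed-down filters.
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class CanPushDownSlotsSketch {
    static Set<String> canPushDownSlots(List<Set<String>> groupingSets, List<String> groupBy) {
        Set<String> result = new LinkedHashSet<>();
        if (!groupingSets.isEmpty()) {
            // common grouping-set columns: present in every grouping set
            result.addAll(groupingSets.get(0));
            groupingSets.forEach(result::retainAll);
        } else {
            result.addAll(groupBy);
        }
        return result;
    }

    public static void main(String[] args) {
        // GROUPING SETS ((a, b), (a)) -> only filters on `a` can be pushed below the aggregate
        System.out.println(canPushDownSlots(List.of(Set.of("a", "b"), Set.of("a")), List.of()));
    }
}
```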

View File

@ -1,39 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
/**
* Push filter through CTE.
*/
public class PushdownFilterThroughCTE extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalFilter(logicalCTE()).thenApply(ctx -> {
LogicalFilter<LogicalCTE<Plan>> filter = ctx.root;
LogicalCTE<Plan> anchor = filter.child();
return anchor.withChildren(filter.withChildren(anchor.child()));
}).toRule(RuleType.PUSHDOWN_FILTER_THROUGH_CTE);
}
}

View File

@ -1,39 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
/**
* Push filter through CTEAnchor.
*/
public class PushdownFilterThroughCTEAnchor extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalFilter(logicalCTEAnchor()).thenApply(ctx -> {
LogicalFilter<LogicalCTEAnchor<Plan, Plan>> filter = ctx.root;
LogicalCTEAnchor<Plan, Plan> anchor = filter.child();
return anchor.withChildren(anchor.left(), filter.withChildren((Plan) anchor.right()));
}).toRule(RuleType.PUSHDOWN_FILTER_THROUGH_CTE_ANCHOR);
}
}

View File

@ -31,6 +31,7 @@ import org.apache.doris.nereids.trees.expressions.WindowExpression;
import org.apache.doris.nereids.trees.expressions.literal.IntegerLikeLiteral;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalEmptyRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalPartitionTopN;
import org.apache.doris.nereids.trees.plans.logical.LogicalWindow;
@ -77,7 +78,8 @@ public class PushdownFilterThroughWindow extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalFilter(logicalWindow()).then(filter -> {
return logicalFilter(logicalWindow()).thenApply(ctx -> {
LogicalFilter<LogicalWindow<Plan>> filter = ctx.root;
LogicalWindow<Plan> window = filter.child();
// We have already done such optimization rule, so just ignore it.
@ -117,7 +119,7 @@ public class PushdownFilterThroughWindow extends OneRewriteRuleFactory {
limitVal--;
}
if (limitVal < 0) {
return new LogicalEmptyRelation(filter.getOutput());
return new LogicalEmptyRelation(ctx.statementContext.getNextRelationId(), filter.getOutput());
}
if (hasPartitionLimit) {
partitionLimit = Math.min(partitionLimit, limitVal);

View File

@ -20,6 +20,7 @@ package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.UnaryNode;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.algebra.Limit;
import org.apache.doris.nereids.trees.plans.algebra.SetOperation.Qualifier;
@ -134,7 +135,8 @@ public class PushdownLimit implements RewriteRuleFactory {
}).toRule(RuleType.PUSH_LIMIT_INTO_SORT),
logicalLimit(logicalOneRowRelation())
.then(limit -> limit.getLimit() > 0 && limit.getOffset() == 0
? limit.child() : new LogicalEmptyRelation(limit.child().getOutput()))
? limit.child() : new LogicalEmptyRelation(StatementScopeIdGenerator.newRelationId(),
limit.child().getOutput()))
.toRule(RuleType.ELIMINATE_LIMIT_ON_ONE_ROW_RELATION),
logicalLimit(logicalEmptyRelation())
.then(UnaryNode::child)

View File

@ -1,39 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTE;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
/**
* Push project through CTE.
*/
public class PushdownProjectThroughCTE extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalProject(logicalCTE()).thenApply(ctx -> {
LogicalProject<LogicalCTE<Plan>> project = ctx.root;
LogicalCTE<Plan> anchor = project.child();
return anchor.withChildren(project.withChildren(anchor.child()));
}).toRule(RuleType.PUSH_DOWN_PROJECT_THROUGH_CTE);
}
}

View File

@ -1,39 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.rules.Rule;
import org.apache.doris.nereids.rules.RuleType;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
/**
* Push project through CTEAnchor.
*/
public class PushdownProjectThroughCTEAnchor extends OneRewriteRuleFactory {
@Override
public Rule build() {
return logicalProject(logicalCTEAnchor()).thenApply(ctx -> {
LogicalProject<LogicalCTEAnchor<Plan, Plan>> project = ctx.root;
LogicalCTEAnchor<Plan, Plan> anchor = project.child();
return anchor.withChildren(anchor.child(0), project.withChildren(anchor.child(1)));
}).toRule(RuleType.PUSH_DOWN_PROJECT_THROUGH_CTE_ANCHOR);
}
}

View File

@ -87,7 +87,7 @@ public class ReorderJoin extends OneRewriteRuleFactory {
Plan plan = joinToMultiJoin(filter, planToHintType);
Preconditions.checkState(plan instanceof MultiJoin);
MultiJoin multiJoin = (MultiJoin) plan;
ctx.statementContext.setMaxNArayInnerJoin(multiJoin.children().size());
ctx.statementContext.setMaxNAryInnerJoin(multiJoin.children().size());
Plan after = multiJoinToJoin(multiJoin, planToHintType);
return after;
}).toRule(RuleType.REORDER_JOIN);

View File

@ -0,0 +1,189 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.rules.rewrite;
import org.apache.doris.nereids.CascadesContext;
import org.apache.doris.nereids.annotation.DependsRules;
import org.apache.doris.nereids.jobs.JobContext;
import org.apache.doris.nereids.jobs.executor.Rewriter;
import org.apache.doris.nereids.jobs.rewrite.RewriteJob;
import org.apache.doris.nereids.properties.PhysicalProperties;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.nereids.trees.plans.visitor.CustomRewriter;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
import org.apache.doris.nereids.util.ExpressionUtils;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import org.apache.commons.collections.CollectionUtils;
import java.util.HashSet;
import java.util.List;
import java.util.Map.Entry;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
/**
 * rewrite the consumer side and producer side of each CteAnchor recursively; all CteAnchors must be at the top of the plan
*/
@DependsRules({PullUpCteAnchor.class, CTEInline.class})
public class RewriteCteChildren extends DefaultPlanRewriter<CascadesContext> implements CustomRewriter {
private final List<RewriteJob> jobs;
public RewriteCteChildren(List<RewriteJob> jobs) {
this.jobs = jobs;
}
@Override
public Plan rewriteRoot(Plan plan, JobContext jobContext) {
return plan.accept(this, jobContext.getCascadesContext());
}
@Override
public Plan visit(Plan plan, CascadesContext context) {
Rewriter.getCteChildrenRewriter(context, jobs).execute();
return context.getRewritePlan();
}
@Override
public Plan visitLogicalCTEAnchor(LogicalCTEAnchor<? extends Plan, ? extends Plan> cteAnchor,
CascadesContext cascadesContext) {
LogicalPlan outer;
if (cascadesContext.getStatementContext().getRewrittenCtePlan().containsKey(null)) {
outer = cascadesContext.getStatementContext().getRewrittenCtePlan().get(null);
} else {
CascadesContext outerCascadesCtx = CascadesContext.newSubtreeContext(
Optional.empty(), cascadesContext, cteAnchor.child(1),
cascadesContext.getCurrentJobContext().getRequiredProperties());
outer = (LogicalPlan) cteAnchor.child(1).accept(this, outerCascadesCtx);
cascadesContext.getStatementContext().getRewrittenCtePlan().put(null, outer);
}
boolean reserveAnchor = outer.anyMatch(p -> {
if (p instanceof LogicalCTEConsumer) {
LogicalCTEConsumer logicalCTEConsumer = (LogicalCTEConsumer) p;
return logicalCTEConsumer.getCteId().equals(cteAnchor.getCteId());
}
return false;
});
if (!reserveAnchor) {
return outer;
}
Plan producer = cteAnchor.child(0).accept(this, cascadesContext);
return cteAnchor.withChildren(producer, outer);
}
@Override
public Plan visitLogicalCTEProducer(LogicalCTEProducer<? extends Plan> cteProducer,
CascadesContext cascadesContext) {
LogicalPlan child;
if (cascadesContext.getStatementContext().getRewrittenCtePlan().containsKey(cteProducer.getCteId())) {
child = cascadesContext.getStatementContext().getRewrittenCtePlan().get(cteProducer.getCteId());
} else {
child = (LogicalPlan) cteProducer.child();
child = tryToConstructFilter(cascadesContext, cteProducer.getCteId(), child);
Set<NamedExpression> projects = cascadesContext.getProjectForProducer(cteProducer.getCteId());
if (CollectionUtils.isNotEmpty(projects)
&& cascadesContext.couldPruneColumnOnProducer(cteProducer.getCteId())) {
child = new LogicalProject<>(ImmutableList.copyOf(projects), child);
child = pushPlanUnderAnchor(child);
}
CascadesContext rewrittenCtx = CascadesContext.newSubtreeContext(
Optional.of(cteProducer.getCteId()), cascadesContext, child, PhysicalProperties.ANY);
child = (LogicalPlan) child.accept(this, rewrittenCtx);
cascadesContext.getStatementContext().getRewrittenCtePlan().put(cteProducer.getCteId(), child);
}
return cteProducer.withChildren(child);
}
private LogicalPlan pushPlanUnderAnchor(LogicalPlan plan) {
if (plan.child(0) instanceof LogicalCTEAnchor) {
LogicalPlan child = (LogicalPlan) plan.withChildren(plan.child(0).child(1));
return (LogicalPlan) plan.child(0).withChildren(
plan.child(0).child(0), pushPlanUnderAnchor(child));
}
return plan;
}
/*
* An expression can be pushed down to the producer only if every consumer has filter expressions referencing the same slots.
* For example, let's assume a producer has two consumers, consumer1 and consumer2:
*
* filter(a > 5 and b < 1) -> consumer1
* filter(a < 8) -> consumer2
*
* In this case, the only expression that can be pushed down to the producer is filter(a > 5 or a < 8).
*/
private LogicalPlan tryToConstructFilter(CascadesContext cascadesContext, CTEId cteId, LogicalPlan child) {
Set<RelationId> consumerIds = cascadesContext.getCteIdToConsumers().get(cteId).stream()
.map(LogicalCTEConsumer::getRelationId)
.collect(Collectors.toSet());
Set<Set<Expression>> filtersAboveEachConsumer = cascadesContext.getConsumerIdToFilters().entrySet().stream()
.filter(kv -> consumerIds.contains(kv.getKey()))
.map(Entry::getValue)
.collect(Collectors.toSet());
Set<Expression> someone = filtersAboveEachConsumer.stream().findFirst().orElse(null);
if (someone == null) {
return child;
}
int filterSize = cascadesContext.getCteIdToConsumers().get(cteId).size();
Set<Expression> conjuncts = new HashSet<>();
for (Expression f : someone) {
int matchCount = 1;
Set<SlotReference> slots = f.collect(e -> e instanceof SlotReference);
Set<Expression> mightBeJoined = new HashSet<>();
for (Set<Expression> another : filtersAboveEachConsumer) {
if (another.equals(someone)) {
continue;
}
Set<Expression> matched = new HashSet<>();
for (Expression e : another) {
Set<SlotReference> otherSlots = e.collect(ae -> ae instanceof SlotReference);
if (otherSlots.equals(slots)) {
matched.add(e);
}
}
if (!matched.isEmpty()) {
matchCount++;
}
mightBeJoined.addAll(matched);
}
if (matchCount >= filterSize) {
mightBeJoined.add(f);
conjuncts.add(ExpressionUtils.or(mightBeJoined));
}
}
if (!conjuncts.isEmpty()) {
LogicalPlan filter = new LogicalFilter<>(ImmutableSet.of(ExpressionUtils.and(conjuncts)), child);
return pushPlanUnderAnchor(filter);
}
return child;
}
}
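
The matching rule in `tryToConstructFilter` above can be illustrated outside of the planner. Below is a minimal, self-contained sketch of that rule (plain strings stand in for expressions and slot sets; hypothetical names, not Doris code): a predicate above one consumer is pushable to the producer only when every consumer has at least one predicate over the same slot set, and the pushed conjunct is the OR of all matched predicates.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical standalone sketch of the tryToConstructFilter matching rule.
public class CteFilterPushDownSketch {
    // a toy predicate: its text plus the set of slots it references
    static final class Pred {
        final String text;
        final Set<String> slots;
        Pred(String text, Set<String> slots) {
            this.text = text;
            this.slots = slots;
        }
    }

    static List<String> pushableConjuncts(List<Set<Pred>> filtersAboveEachConsumer) {
        List<String> conjuncts = new ArrayList<>();
        if (filtersAboveEachConsumer.isEmpty()) {
            return conjuncts;
        }
        Set<Pred> someone = filtersAboveEachConsumer.get(0);
        for (Pred p : someone) {
            int matchCount = 1;
            List<String> mightBeJoined = new ArrayList<>();
            for (Set<Pred> another
                    : filtersAboveEachConsumer.subList(1, filtersAboveEachConsumer.size())) {
                List<String> matched = another.stream()
                        .filter(o -> o.slots.equals(p.slots))
                        .map(o -> o.text)
                        .collect(Collectors.toList());
                if (!matched.isEmpty()) {
                    matchCount++;
                }
                mightBeJoined.addAll(matched);
            }
            // push only when every consumer contributed a predicate over the same slots
            if (matchCount >= filtersAboveEachConsumer.size()) {
                mightBeJoined.add(p.text);
                conjuncts.add("(" + String.join(" OR ", mightBeJoined) + ")");
            }
        }
        return conjuncts;
    }

    public static void main(String[] args) {
        Set<Pred> consumer1 = new LinkedHashSet<>(Arrays.asList(
                new Pred("a > 5", Collections.singleton("a")),
                new Pred("b < 1", Collections.singleton("b"))));
        Set<Pred> consumer2 = new LinkedHashSet<>(Collections.singletonList(
                new Pred("a < 8", Collections.singleton("a"))));
        // prints [(a < 8 OR a > 5)]: only the predicates on `a` appear above both consumers,
        // so `b < 1` is not pushed into the producer
        System.out.println(pushableConjuncts(Arrays.asList(consumer1, consumer2)));
    }
}
```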

View File

@ -39,6 +39,7 @@ import org.apache.doris.nereids.trees.expressions.functions.agg.AggregateFunctio
import org.apache.doris.nereids.trees.expressions.functions.window.Rank;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.algebra.Aggregate;
import org.apache.doris.nereids.trees.plans.algebra.CatalogRelation;
import org.apache.doris.nereids.trees.plans.algebra.EmptyRelation;
import org.apache.doris.nereids.trees.plans.algebra.Filter;
import org.apache.doris.nereids.trees.plans.algebra.Generate;
@ -46,7 +47,6 @@ import org.apache.doris.nereids.trees.plans.algebra.Limit;
import org.apache.doris.nereids.trees.plans.algebra.PartitionTopN;
import org.apache.doris.nereids.trees.plans.algebra.Project;
import org.apache.doris.nereids.trees.plans.algebra.Repeat;
import org.apache.doris.nereids.trees.plans.algebra.Scan;
import org.apache.doris.nereids.trees.plans.algebra.SetOperation;
import org.apache.doris.nereids.trees.plans.algebra.TopN;
import org.apache.doris.nereids.trees.plans.algebra.Union;
@ -214,7 +214,7 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
// For unit test only
public static void estimate(GroupExpression groupExpression, CascadesContext context) {
StatsCalculator statsCalculator = new StatsCalculator(groupExpression, false,
new HashMap<>(), false, Collections.EMPTY_MAP, context);
new HashMap<>(), false, Collections.emptyMap(), context);
statsCalculator.estimate();
}
@ -287,18 +287,18 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
@Override
public Statistics visitLogicalOlapScan(LogicalOlapScan olapScan, Void context) {
return computeScan(olapScan);
return computeCatalogRelation(olapScan);
}
@Override
public Statistics visitLogicalSchemaScan(LogicalSchemaScan schemaScan, Void context) {
return computeScan(schemaScan);
return computeCatalogRelation(schemaScan);
}
@Override
public Statistics visitLogicalFileScan(LogicalFileScan fileScan, Void context) {
fileScan.getExpressions();
return computeScan(fileScan);
return computeCatalogRelation(fileScan);
}
@Override
@ -309,13 +309,13 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
@Override
public Statistics visitLogicalJdbcScan(LogicalJdbcScan jdbcScan, Void context) {
jdbcScan.getExpressions();
return computeScan(jdbcScan);
return computeCatalogRelation(jdbcScan);
}
@Override
public Statistics visitLogicalEsScan(LogicalEsScan esScan, Void context) {
esScan.getExpressions();
return computeScan(esScan);
return computeCatalogRelation(esScan);
}
@Override
@ -419,17 +419,17 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
@Override
public Statistics visitPhysicalOlapScan(PhysicalOlapScan olapScan, Void context) {
return computeScan(olapScan);
return computeCatalogRelation(olapScan);
}
@Override
public Statistics visitPhysicalSchemaScan(PhysicalSchemaScan schemaScan, Void context) {
return computeScan(schemaScan);
return computeCatalogRelation(schemaScan);
}
@Override
public Statistics visitPhysicalFileScan(PhysicalFileScan fileScan, Void context) {
return computeScan(fileScan);
return computeCatalogRelation(fileScan);
}
@Override
@ -445,12 +445,12 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
@Override
public Statistics visitPhysicalJdbcScan(PhysicalJdbcScan jdbcScan, Void context) {
return computeScan(jdbcScan);
return computeCatalogRelation(jdbcScan);
}
@Override
public Statistics visitPhysicalEsScan(PhysicalEsScan esScan, Void context) {
return computeScan(esScan);
return computeCatalogRelation(esScan);
}
@Override
@ -585,12 +585,12 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
// TODO: 1. Subtract the pruned partition
// 2. Consider the influence of runtime filter
// 3. Get NDV and column data size from StatisticManger, StatisticManager doesn't support it now.
private Statistics computeScan(Scan scan) {
Set<SlotReference> slotSet = scan.getOutput().stream().filter(SlotReference.class::isInstance)
private Statistics computeCatalogRelation(CatalogRelation catalogRelation) {
Set<SlotReference> slotSet = catalogRelation.getOutput().stream().filter(SlotReference.class::isInstance)
.map(s -> (SlotReference) s).collect(Collectors.toSet());
Map<Expression, ColumnStatistic> columnStatisticMap = new HashMap<>();
TableIf table = scan.getTable();
double rowCount = scan.getTable().estimatedRowCount();
TableIf table = catalogRelation.getTable();
double rowCount = catalogRelation.getTable().estimatedRowCount();
for (SlotReference slotReference : slotSet) {
String colName = slotReference.getName();
if (colName == null) {
@ -1022,7 +1022,7 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
Preconditions.checkArgument(prodStats != null, String.format("Stats for CTE: %s not found", cteId));
Statistics consumerStats = new Statistics(prodStats.getRowCount(), new HashMap<>());
for (Slot slot : cteConsumer.getOutput()) {
Slot prodSlot = cteConsumer.findProducerSlot(slot);
Slot prodSlot = cteConsumer.getProducerSlot(slot);
ColumnStatistic colStats = prodStats.columnStatistics().get(prodSlot);
if (colStats == null) {
continue;
@ -1058,7 +1058,7 @@ public class StatsCalculator extends DefaultPlanVisitor<Statistics, Void> {
Preconditions.checkArgument(prodStats != null, String.format("Stats for CTE: %s not found", cteId));
Statistics consumerStats = new Statistics(prodStats.getRowCount(), new HashMap<>());
for (Slot slot : cteConsumer.getOutput()) {
Slot prodSlot = cteConsumer.findProducerSlot(slot);
Slot prodSlot = cteConsumer.getProducerSlot(slot);
ColumnStatistic colStats = prodStats.columnStatistics().get(prodSlot);
if (colStats == null) {
continue;

View File

@ -0,0 +1,50 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.copier;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.logical.LogicalRelation;
import com.google.common.collect.Maps;
import java.util.Map;
/**
* context info used in LogicalPlan deep copy
*/
public class DeepCopierContext {
/**
* map from the original ExprId to the new ExprId generated during deep copy
*/
public final Map<ExprId, ExprId> exprIdReplaceMap = Maps.newHashMap();
/**
* Because LogicalApply keeps the original subquery plan both in itself and in its right child,
* we must reuse relations with exactly the same output (same ExprIds) between the two plan trees
* to keep them consistent after the deep copy.
*/
private final Map<RelationId, LogicalRelation> relationReplaceMap = Maps.newHashMap();
public void putRelation(RelationId relationId, LogicalRelation newRelation) {
relationReplaceMap.put(relationId, newRelation);
}
public Map<RelationId, LogicalRelation> getRelationReplaceMap() {
return relationReplaceMap;
}
}
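
The role of `relationReplaceMap` can be modeled with a minimal standalone sketch (hypothetical names, not Doris code): the first copy of a relation is registered under the original relation id, so a later request to copy the same relation returns the very same node and both plan trees end up referencing identical output ids.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical standalone model of the relationReplaceMap idea.
public class RelationReplaceMapSketch {
    static final class Relation {
        final int relationId;   // stands in for RelationId
        final int outputExprId; // stands in for the ExprId of its single output slot
        Relation(int relationId, int outputExprId) {
            this.relationId = relationId;
            this.outputExprId = outputExprId;
        }
    }

    static final class CopierContext {
        final Map<Integer, Relation> relationReplaceMap = new HashMap<>();
        int nextRelationId = 10; // pretend statement-scoped id generators
        int nextExprId = 100;
    }

    static Relation deepCopy(Relation original, CopierContext ctx) {
        Relation existed = ctx.relationReplaceMap.get(original.relationId);
        if (existed != null) {
            return existed; // reuse the copy made earlier for this relation
        }
        Relation copy = new Relation(ctx.nextRelationId++, ctx.nextExprId++);
        ctx.relationReplaceMap.put(original.relationId, copy);
        return copy;
    }

    public static void main(String[] args) {
        CopierContext ctx = new CopierContext();
        Relation scan = new Relation(1, 7);
        Relation copyInApply = deepCopy(scan, ctx);      // fresh ids: relation 10, output 100
        Relation copyInRightChild = deepCopy(scan, ctx); // same object as copyInApply
        System.out.println(copyInApply == copyInRightChild); // true
        System.out.println(copyInApply.outputExprId);         // 100
    }
}
```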

View File

@ -0,0 +1,122 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.copier;
import org.apache.doris.nereids.trees.expressions.Alias;
import org.apache.doris.nereids.trees.expressions.Exists;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.InSubquery;
import org.apache.doris.nereids.trees.expressions.ListQuery;
import org.apache.doris.nereids.trees.expressions.ScalarSubquery;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.expressions.visitor.DefaultExpressionRewriter;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;
/**
* Deep copy an expression, generating a new ExprId for each SlotReference and Alias.
*/
public class ExpressionDeepCopier extends DefaultExpressionRewriter<DeepCopierContext> {
public static ExpressionDeepCopier INSTANCE = new ExpressionDeepCopier();
public Expression deepCopy(Expression expression, DeepCopierContext context) {
return expression.accept(this, context);
}
@Override
public Expression visitAlias(Alias alias, DeepCopierContext context) {
Expression child = alias.child().accept(this, context);
Map<ExprId, ExprId> exprIdReplaceMap = context.exprIdReplaceMap;
Alias newOne;
if (exprIdReplaceMap.containsKey(alias.getExprId())) {
// NOTICE: because we do not normalize aggregates, the same Alias may appear in more than one place.
// So, if we have already copied this Alias once, we must reuse the ExprId generated for it.
newOne = new Alias(exprIdReplaceMap.get(alias.getExprId()), child, alias.getName());
} else {
newOne = new Alias(child, alias.getName());
exprIdReplaceMap.put(alias.getExprId(), newOne.getExprId());
}
return newOne;
}
@Override
public Expression visitSlotReference(SlotReference slotReference, DeepCopierContext context) {
Map<ExprId, ExprId> exprIdReplaceMap = context.exprIdReplaceMap;
if (exprIdReplaceMap.containsKey(slotReference.getExprId())) {
ExprId newExprId = exprIdReplaceMap.get(slotReference.getExprId());
return slotReference.withExprId(newExprId);
} else {
SlotReference newOne = new SlotReference(slotReference.getName(), slotReference.getDataType(),
slotReference.nullable(), slotReference.getQualifier());
exprIdReplaceMap.put(slotReference.getExprId(), newOne.getExprId());
return newOne;
}
}
@Override
public Expression visitExistsSubquery(Exists exists, DeepCopierContext context) {
LogicalPlan logicalPlan = LogicalPlanDeepCopier.INSTANCE.deepCopy(exists.getQueryPlan(), context);
List<Slot> correlateSlots = exists.getCorrelateSlots().stream()
.map(s -> (Slot) s.accept(this, context))
.collect(Collectors.toList());
Optional<Expression> typeCoercionExpr = exists.getTypeCoercionExpr()
.map(c -> c.accept(this, context));
return new Exists(logicalPlan, correlateSlots, typeCoercionExpr, exists.isNot());
}
@Override
public Expression visitListQuery(ListQuery listQuery, DeepCopierContext context) {
LogicalPlan logicalPlan = LogicalPlanDeepCopier.INSTANCE.deepCopy(listQuery.getQueryPlan(), context);
List<Slot> correlateSlots = listQuery.getCorrelateSlots().stream()
.map(s -> (Slot) s.accept(this, context))
.collect(Collectors.toList());
Optional<Expression> typeCoercionExpr = listQuery.getTypeCoercionExpr()
.map(c -> c.accept(this, context));
return new ListQuery(logicalPlan, correlateSlots, typeCoercionExpr);
}
@Override
public Expression visitInSubquery(InSubquery in, DeepCopierContext context) {
Expression compareExpr = in.getCompareExpr().accept(this, context);
List<Slot> correlateSlots = in.getCorrelateSlots().stream()
.map(s -> (Slot) s.accept(this, context))
.collect(Collectors.toList());
Optional<Expression> typeCoercionExpr = in.getTypeCoercionExpr()
.map(c -> c.accept(this, context));
ListQuery listQuery = (ListQuery) in.getListQuery().accept(this, context);
return new InSubquery(compareExpr, listQuery, correlateSlots, typeCoercionExpr, in.isNot());
}
@Override
public Expression visitScalarSubquery(ScalarSubquery scalar, DeepCopierContext context) {
LogicalPlan logicalPlan = LogicalPlanDeepCopier.INSTANCE.deepCopy(scalar.getQueryPlan(), context);
List<Slot> correlateSlots = scalar.getCorrelateSlots().stream()
.map(s -> (Slot) s.accept(this, context))
.collect(Collectors.toList());
Optional<Expression> typeCoercionExpr = scalar.getTypeCoercionExpr()
.map(c -> c.accept(this, context));
return new ScalarSubquery(logicalPlan, correlateSlots, typeCoercionExpr);
}
}
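
A short usage sketch of the copier above, assuming the FE classes from this PR are on the classpath (the `SlotReference(String, DataType)` constructor and `BooleanType.INSTANCE` appear elsewhere in this diff; the wrapper class and expected output are illustrative): copying the same slot twice through one `DeepCopierContext` yields two slots that share a single newly generated ExprId.

```java
import org.apache.doris.nereids.trees.copier.DeepCopierContext;
import org.apache.doris.nereids.trees.copier.ExpressionDeepCopier;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.types.BooleanType;

// Usage sketch only: assumes the Doris FE code from this PR on the classpath.
public class ExpressionDeepCopierSketch {
    public static void main(String[] args) {
        DeepCopierContext context = new DeepCopierContext();
        // SlotReference(String, DataType) is the constructor shown in this diff
        SlotReference a = new SlotReference("a", BooleanType.INSTANCE);
        // the first copy generates a fresh ExprId and records oldExprId -> newExprId
        SlotReference copy1 = (SlotReference) ExpressionDeepCopier.INSTANCE.deepCopy(a, context);
        // the second copy finds the mapping and reuses the same new ExprId
        SlotReference copy2 = (SlotReference) ExpressionDeepCopier.INSTANCE.deepCopy(a, context);
        System.out.println(copy1.getExprId().equals(copy2.getExprId())); // expected: true
        System.out.println(copy1.getExprId().equals(a.getExprId()));     // expected: false
    }
}
```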

View File

@ -0,0 +1,429 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.copier;
import org.apache.doris.nereids.exceptions.AnalysisException;
import org.apache.doris.nereids.properties.OrderKey;
import org.apache.doris.nereids.trees.expressions.ExprId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.MarkJoinSlotReference;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.OrderExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import org.apache.doris.nereids.trees.expressions.SubqueryExpr;
import org.apache.doris.nereids.trees.expressions.functions.Function;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.logical.LogicalAggregate;
import org.apache.doris.nereids.trees.plans.logical.LogicalApply;
import org.apache.doris.nereids.trees.plans.logical.LogicalAssertNumRows;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEAnchor;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEConsumer;
import org.apache.doris.nereids.trees.plans.logical.LogicalCTEProducer;
import org.apache.doris.nereids.trees.plans.logical.LogicalEmptyRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalEsScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalExcept;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalFileSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalFilter;
import org.apache.doris.nereids.trees.plans.logical.LogicalGenerate;
import org.apache.doris.nereids.trees.plans.logical.LogicalHaving;
import org.apache.doris.nereids.trees.plans.logical.LogicalIntersect;
import org.apache.doris.nereids.trees.plans.logical.LogicalJdbcScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalJoin;
import org.apache.doris.nereids.trees.plans.logical.LogicalLimit;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalOlapTableSink;
import org.apache.doris.nereids.trees.plans.logical.LogicalOneRowRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalPartitionTopN;
import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
import org.apache.doris.nereids.trees.plans.logical.LogicalProject;
import org.apache.doris.nereids.trees.plans.logical.LogicalRepeat;
import org.apache.doris.nereids.trees.plans.logical.LogicalSchemaScan;
import org.apache.doris.nereids.trees.plans.logical.LogicalSort;
import org.apache.doris.nereids.trees.plans.logical.LogicalTVFRelation;
import org.apache.doris.nereids.trees.plans.logical.LogicalTopN;
import org.apache.doris.nereids.trees.plans.logical.LogicalUnion;
import org.apache.doris.nereids.trees.plans.logical.LogicalWindow;
import org.apache.doris.nereids.trees.plans.visitor.DefaultPlanRewriter;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
/**
* deep copy a plan
*/
public class LogicalPlanDeepCopier extends DefaultPlanRewriter<DeepCopierContext> {
public static LogicalPlanDeepCopier INSTANCE = new LogicalPlanDeepCopier();
public LogicalPlan deepCopy(LogicalPlan plan, DeepCopierContext context) {
return (LogicalPlan) plan.accept(this, context);
}
@Override
public Plan visitLogicalEmptyRelation(LogicalEmptyRelation emptyRelation, DeepCopierContext context) {
List<NamedExpression> newProjects = emptyRelation.getProjects().stream()
.map(p -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableList.toImmutableList());
return new LogicalEmptyRelation(StatementScopeIdGenerator.newRelationId(), newProjects);
}
@Override
public Plan visitLogicalOneRowRelation(LogicalOneRowRelation oneRowRelation, DeepCopierContext context) {
List<NamedExpression> newProjects = oneRowRelation.getProjects().stream()
.map(p -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableList.toImmutableList());
return new LogicalOneRowRelation(StatementScopeIdGenerator.newRelationId(), newProjects);
}
@Override
public Plan visitLogicalApply(LogicalApply<? extends Plan, ? extends Plan> apply, DeepCopierContext context) {
Plan left = apply.left().accept(this, context);
Plan right = apply.right().accept(this, context);
List<Expression> correlationSlot = apply.getCorrelationSlot().stream()
.map(s -> ExpressionDeepCopier.INSTANCE.deepCopy(s, context))
.collect(ImmutableList.toImmutableList());
SubqueryExpr subqueryExpr = (SubqueryExpr) ExpressionDeepCopier.INSTANCE
.deepCopy(apply.getSubqueryExpr(), context);
Optional<Expression> correlationFilter = apply.getCorrelationFilter()
.map(f -> ExpressionDeepCopier.INSTANCE.deepCopy(f, context));
Optional<MarkJoinSlotReference> markJoinSlotReference = apply.getMarkJoinSlotReference()
.map(m -> (MarkJoinSlotReference) ExpressionDeepCopier.INSTANCE.deepCopy(m, context));
Optional<Expression> subCorrespondingConjunct = apply.getSubCorrespondingConjunct()
.map(c -> ExpressionDeepCopier.INSTANCE.deepCopy(c, context));
return new LogicalApply<>(correlationSlot, subqueryExpr, correlationFilter,
markJoinSlotReference, subCorrespondingConjunct, apply.isNeedAddSubOutputToProjects(), left, right);
}
@Override
public Plan visitLogicalAggregate(LogicalAggregate<? extends Plan> aggregate, DeepCopierContext context) {
Plan child = aggregate.child().accept(this, context);
List<Expression> groupByExpressions = aggregate.getGroupByExpressions().stream()
.map(k -> ExpressionDeepCopier.INSTANCE.deepCopy(k, context))
.collect(ImmutableList.toImmutableList());
List<NamedExpression> outputExpressions = aggregate.getOutputExpressions().stream()
.map(o -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalAggregate<>(groupByExpressions, outputExpressions, child);
}
@Override
public Plan visitLogicalRepeat(LogicalRepeat<? extends Plan> repeat, DeepCopierContext context) {
Plan child = repeat.child().accept(this, context);
List<List<Expression>> groupingSets = repeat.getGroupingSets().stream()
.map(l -> l.stream()
.map(e -> ExpressionDeepCopier.INSTANCE.deepCopy(e, context))
.collect(ImmutableList.toImmutableList()))
.collect(ImmutableList.toImmutableList());
List<NamedExpression> outputExpressions = repeat.getOutputExpressions().stream()
.map(e -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(e, context))
.collect(ImmutableList.toImmutableList());
return new LogicalRepeat<>(groupingSets, outputExpressions, child);
}
@Override
public Plan visitLogicalFilter(LogicalFilter<? extends Plan> filter, DeepCopierContext context) {
Plan child = filter.child().accept(this, context);
Set<Expression> conjuncts = filter.getConjuncts().stream()
.map(p -> ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableSet.toImmutableSet());
return new LogicalFilter<>(conjuncts, child);
}
@Override
public Plan visitLogicalOlapScan(LogicalOlapScan olapScan, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(olapScan.getRelationId())) {
return context.getRelationReplaceMap().get(olapScan.getRelationId());
}
LogicalOlapScan newOlapScan;
if (olapScan.getManuallySpecifiedPartitions().isEmpty()) {
newOlapScan = new LogicalOlapScan(StatementScopeIdGenerator.newRelationId(),
olapScan.getTable(), olapScan.getQualifier(), olapScan.getHints());
} else {
newOlapScan = new LogicalOlapScan(StatementScopeIdGenerator.newRelationId(),
olapScan.getTable(), olapScan.getQualifier(),
olapScan.getManuallySpecifiedPartitions(), olapScan.getHints());
}
newOlapScan.getOutput();
context.putRelation(olapScan.getRelationId(), newOlapScan);
updateReplaceMapWithOutput(olapScan, newOlapScan, context.exprIdReplaceMap);
return newOlapScan;
}
@Override
public Plan visitLogicalSchemaScan(LogicalSchemaScan schemaScan, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(schemaScan.getRelationId())) {
return context.getRelationReplaceMap().get(schemaScan.getRelationId());
}
LogicalSchemaScan newSchemaScan = new LogicalSchemaScan(StatementScopeIdGenerator.newRelationId(),
schemaScan.getTable(), schemaScan.getQualifier());
updateReplaceMapWithOutput(schemaScan, newSchemaScan, context.exprIdReplaceMap);
context.putRelation(schemaScan.getRelationId(), newSchemaScan);
return newSchemaScan;
}
@Override
public Plan visitLogicalFileScan(LogicalFileScan fileScan, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(fileScan.getRelationId())) {
return context.getRelationReplaceMap().get(fileScan.getRelationId());
}
LogicalFileScan newFileScan = new LogicalFileScan(StatementScopeIdGenerator.newRelationId(),
fileScan.getTable(), fileScan.getQualifier());
updateReplaceMapWithOutput(fileScan, newFileScan, context.exprIdReplaceMap);
context.putRelation(fileScan.getRelationId(), newFileScan);
Set<Expression> conjuncts = fileScan.getConjuncts().stream()
.map(p -> ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableSet.toImmutableSet());
return newFileScan.withConjuncts(conjuncts);
}
@Override
public Plan visitLogicalTVFRelation(LogicalTVFRelation tvfRelation, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(tvfRelation.getRelationId())) {
return context.getRelationReplaceMap().get(tvfRelation.getRelationId());
}
LogicalTVFRelation newTVFRelation = new LogicalTVFRelation(StatementScopeIdGenerator.newRelationId(),
tvfRelation.getFunction());
updateReplaceMapWithOutput(tvfRelation, newTVFRelation, context.exprIdReplaceMap);
context.putRelation(tvfRelation.getRelationId(), newTVFRelation);
return newTVFRelation;
}
@Override
public Plan visitLogicalJdbcScan(LogicalJdbcScan jdbcScan, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(jdbcScan.getRelationId())) {
return context.getRelationReplaceMap().get(jdbcScan.getRelationId());
}
LogicalJdbcScan newJdbcScan = new LogicalJdbcScan(StatementScopeIdGenerator.newRelationId(),
jdbcScan.getTable(), jdbcScan.getQualifier());
updateReplaceMapWithOutput(jdbcScan, newJdbcScan, context.exprIdReplaceMap);
context.putRelation(jdbcScan.getRelationId(), newJdbcScan);
return newJdbcScan;
}
@Override
public Plan visitLogicalEsScan(LogicalEsScan esScan, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(esScan.getRelationId())) {
return context.getRelationReplaceMap().get(esScan.getRelationId());
}
LogicalEsScan newEsScan = new LogicalEsScan(StatementScopeIdGenerator.newRelationId(),
esScan.getTable(), esScan.getQualifier());
updateReplaceMapWithOutput(esScan, newEsScan, context.exprIdReplaceMap);
context.putRelation(esScan.getRelationId(), newEsScan);
return newEsScan;
}
@Override
public Plan visitLogicalProject(LogicalProject<? extends Plan> project, DeepCopierContext context) {
Plan child = project.child().accept(this, context);
List<NamedExpression> newProjects = project.getProjects().stream()
.map(p -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableList.toImmutableList());
return new LogicalProject<>(newProjects, child);
}
@Override
public Plan visitLogicalSort(LogicalSort<? extends Plan> sort, DeepCopierContext context) {
Plan child = sort.child().accept(this, context);
List<OrderKey> orderKeys = sort.getOrderKeys().stream()
.map(o -> new OrderKey(ExpressionDeepCopier.INSTANCE.deepCopy(o.getExpr(), context),
o.isAsc(), o.isNullFirst()))
.collect(ImmutableList.toImmutableList());
return new LogicalSort<>(orderKeys, child);
}
@Override
public Plan visitLogicalTopN(LogicalTopN<? extends Plan> topN, DeepCopierContext context) {
Plan child = topN.child().accept(this, context);
List<OrderKey> orderKeys = topN.getOrderKeys().stream()
.map(o -> new OrderKey(ExpressionDeepCopier.INSTANCE.deepCopy(o.getExpr(), context),
o.isAsc(), o.isNullFirst()))
.collect(ImmutableList.toImmutableList());
return new LogicalTopN<>(orderKeys, topN.getLimit(), topN.getOffset(), child);
}
@Override
public Plan visitLogicalPartitionTopN(LogicalPartitionTopN<? extends Plan> partitionTopN,
DeepCopierContext context) {
Plan child = partitionTopN.child().accept(this, context);
List<Expression> partitionKeys = partitionTopN.getPartitionKeys().stream()
.map(p -> ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableList.toImmutableList());
List<OrderExpression> orderKeys = partitionTopN.getOrderKeys().stream()
.map(o -> (OrderExpression) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalPartitionTopN<>(partitionTopN.getFunction(), partitionKeys, orderKeys,
partitionTopN.hasGlobalLimit(), partitionTopN.getPartitionLimit(), child);
}
@Override
public Plan visitLogicalLimit(LogicalLimit<? extends Plan> limit, DeepCopierContext context) {
Plan child = limit.child().accept(this, context);
return new LogicalLimit<>(limit.getLimit(), limit.getOffset(), limit.getPhase(), child);
}
@Override
public Plan visitLogicalJoin(LogicalJoin<? extends Plan, ? extends Plan> join, DeepCopierContext context) {
List<Plan> children = join.children().stream()
.map(c -> c.accept(this, context))
.collect(ImmutableList.toImmutableList());
List<Expression> otherJoinConjuncts = join.getOtherJoinConjuncts().stream()
.map(c -> ExpressionDeepCopier.INSTANCE.deepCopy(c, context))
.collect(ImmutableList.toImmutableList());
List<Expression> hashJoinConjuncts = join.getHashJoinConjuncts().stream()
.map(c -> ExpressionDeepCopier.INSTANCE.deepCopy(c, context))
.collect(ImmutableList.toImmutableList());
return new LogicalJoin<>(join.getJoinType(), hashJoinConjuncts, otherJoinConjuncts,
join.getHint(), join.getMarkJoinSlotReference(), children);
}
@Override
public Plan visitLogicalAssertNumRows(LogicalAssertNumRows<? extends Plan> assertNumRows,
DeepCopierContext context) {
Plan child = assertNumRows.child().accept(this, context);
return new LogicalAssertNumRows<>(assertNumRows.getAssertNumRowsElement(), child);
}
@Override
public Plan visitLogicalHaving(LogicalHaving<? extends Plan> having, DeepCopierContext context) {
Plan child = having.child().accept(this, context);
Set<Expression> conjuncts = having.getConjuncts().stream()
.map(p -> ExpressionDeepCopier.INSTANCE.deepCopy(p, context))
.collect(ImmutableSet.toImmutableSet());
return new LogicalHaving<>(conjuncts, child);
}
@Override
public Plan visitLogicalUnion(LogicalUnion union, DeepCopierContext context) {
List<Plan> children = union.children().stream()
.map(c -> c.accept(this, context))
.collect(ImmutableList.toImmutableList());
List<List<NamedExpression>> constantExprsList = union.getConstantExprsList().stream()
.map(l -> l.stream()
.map(e -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(e, context))
.collect(ImmutableList.toImmutableList()))
.collect(ImmutableList.toImmutableList());
List<NamedExpression> outputs = union.getOutputs().stream()
.map(o -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalUnion(union.getQualifier(), outputs, constantExprsList, union.hasPushedFilter(), children);
}
@Override
public Plan visitLogicalExcept(LogicalExcept except, DeepCopierContext context) {
List<Plan> children = except.children().stream()
.map(c -> c.accept(this, context))
.collect(ImmutableList.toImmutableList());
List<NamedExpression> outputs = except.getOutputs().stream()
.map(o -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalExcept(except.getQualifier(), outputs, children);
}
@Override
public Plan visitLogicalIntersect(LogicalIntersect intersect, DeepCopierContext context) {
List<Plan> children = intersect.children().stream()
.map(c -> c.accept(this, context))
.collect(ImmutableList.toImmutableList());
List<NamedExpression> outputs = intersect.getOutputs().stream()
.map(o -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalIntersect(intersect.getQualifier(), outputs, children);
}
@Override
public Plan visitLogicalGenerate(LogicalGenerate<? extends Plan> generate, DeepCopierContext context) {
Plan child = generate.child().accept(this, context);
List<Function> generators = generate.getGenerators().stream()
.map(g -> (Function) ExpressionDeepCopier.INSTANCE.deepCopy(g, context))
.collect(ImmutableList.toImmutableList());
List<Slot> generatorOutput = generate.getGeneratorOutput().stream()
.map(o -> (Slot) ExpressionDeepCopier.INSTANCE.deepCopy(o, context))
.collect(ImmutableList.toImmutableList());
return new LogicalGenerate<>(generators, generatorOutput, child);
}
@Override
public Plan visitLogicalWindow(LogicalWindow<? extends Plan> window, DeepCopierContext context) {
Plan child = window.child().accept(this, context);
List<NamedExpression> windowExpressions = window.getWindowExpressions().stream()
.map(w -> (NamedExpression) ExpressionDeepCopier.INSTANCE.deepCopy(w, context))
.collect(ImmutableList.toImmutableList());
return new LogicalWindow<>(windowExpressions, child);
}
@Override
public Plan visitLogicalOlapTableSink(LogicalOlapTableSink<? extends Plan> olapTableSink,
DeepCopierContext context) {
Plan child = olapTableSink.child().accept(this, context);
return new LogicalOlapTableSink<>(olapTableSink.getDatabase(), olapTableSink.getTargetTable(),
olapTableSink.getCols(), olapTableSink.getPartitionIds(), child);
}
@Override
public Plan visitLogicalFileSink(LogicalFileSink<? extends Plan> fileSink, DeepCopierContext context) {
Plan child = fileSink.child().accept(this, context);
return fileSink.withChildren(child);
}
@Override
public Plan visitLogicalCTEProducer(LogicalCTEProducer<? extends Plan> cteProducer, DeepCopierContext context) {
throw new AnalysisException("plan deep copier could not copy CTEProducer.");
}
@Override
public Plan visitLogicalCTEConsumer(LogicalCTEConsumer cteConsumer, DeepCopierContext context) {
if (context.getRelationReplaceMap().containsKey(cteConsumer.getRelationId())) {
return context.getRelationReplaceMap().get(cteConsumer.getRelationId());
}
Map<Slot, Slot> consumerToProducerOutputMap = new LinkedHashMap<>();
Map<Slot, Slot> producerToConsumerOutputMap = new LinkedHashMap<>();
for (Slot consumerOutput : cteConsumer.getOutput()) {
Slot newOutput = (Slot) ExpressionDeepCopier.INSTANCE.deepCopy(consumerOutput, context);
consumerToProducerOutputMap.put(newOutput, cteConsumer.getProducerSlot(consumerOutput));
producerToConsumerOutputMap.put(cteConsumer.getProducerSlot(consumerOutput), newOutput);
}
LogicalCTEConsumer newCTEConsumer = new LogicalCTEConsumer(
StatementScopeIdGenerator.newRelationId(),
cteConsumer.getCteId(), cteConsumer.getName(),
consumerToProducerOutputMap, producerToConsumerOutputMap);
context.putRelation(cteConsumer.getRelationId(), newCTEConsumer);
return newCTEConsumer;
}
@Override
public Plan visitLogicalCTEAnchor(LogicalCTEAnchor<? extends Plan, ? extends Plan> cteAnchor,
DeepCopierContext context) {
throw new AnalysisException("plan deep copier could not copy CTEAnchor.");
}
private void updateReplaceMapWithOutput(Plan oldPlan, Plan newPlan, Map<ExprId, ExprId> replaceMap) {
List<Slot> oldOutput = oldPlan.getOutput();
List<Slot> newOutput = newPlan.getOutput();
for (int i = 0; i < newOutput.size(); i++) {
replaceMap.put(oldOutput.get(i).getExprId(), newOutput.get(i).getExprId());
}
}
}

View File

@ -34,11 +34,12 @@ import java.util.Optional;
* Exists subquery expression.
*/
public class Exists extends SubqueryExpr implements LeafExpression {
private final boolean isNot;
public Exists(LogicalPlan subquery, boolean isNot) {
super(Objects.requireNonNull(subquery, "subquery can not be null"));
this.isNot = Objects.requireNonNull(isNot, "isNot can not be null");
this.isNot = isNot;
}
public Exists(LogicalPlan subquery, List<Slot> correlateSlots, boolean isNot) {
@ -52,7 +53,7 @@ public class Exists extends SubqueryExpr implements LeafExpression {
super(Objects.requireNonNull(subquery, "subquery can not be null"),
Objects.requireNonNull(correlateSlots, "correlateSlots can not be null"),
typeCoercionExpr);
this.isNot = Objects.requireNonNull(isNot, "isNot can not be null");
this.isNot = isNot;
}
public boolean isNot() {

View File

@ -20,6 +20,8 @@ package org.apache.doris.nereids.trees.expressions;
import org.apache.doris.nereids.trees.expressions.visitor.ExpressionVisitor;
import org.apache.doris.nereids.types.BooleanType;
import com.google.common.collect.ImmutableList;
/**
* A special type of column that will be generated to replace the subquery when unnesting the subquery of MarkJoin.
*/
@ -36,6 +38,11 @@ public class MarkJoinSlotReference extends SlotReference implements SlotNotFromC
this.existsHasAgg = existsHasAgg;
}
public MarkJoinSlotReference(ExprId exprId, String name, boolean existsHasAgg) {
super(exprId, name, BooleanType.INSTANCE, false, ImmutableList.of());
this.existsHasAgg = existsHasAgg;
}
@Override
public <R, C> R accept(ExpressionVisitor<R, C> visitor, C context) {
return visitor.visitMarkJoinReference(this, context);
@ -61,4 +68,9 @@ public class MarkJoinSlotReference extends SlotReference implements SlotNotFromC
public boolean isExistsHasAgg() {
return existsHasAgg;
}
@Override
public MarkJoinSlotReference withExprId(ExprId exprId) {
return new MarkJoinSlotReference(exprId, name, existsHasAgg);
}
}

View File

@ -35,11 +35,15 @@ public abstract class Slot extends NamedExpression implements LeafExpression {
throw new RuntimeException("Do not implement");
}
public Slot withQualifier(List<String> qualifiers) {
public Slot withQualifier(List<String> qualifier) {
throw new RuntimeException("Do not implement");
}
public Slot withName(String name) {
throw new RuntimeException("Do not implement");
}
public Slot withExprId(ExprId exprId) {
throw new RuntimeException("Do not implement");
}
}

View File

@ -33,13 +33,11 @@ import javax.annotation.Nullable;
* Reference to slot in expression.
*/
public class SlotReference extends Slot {
private final ExprId exprId;
// TODO: we should distinguish the name is alias or column name, and the column name should contains
// `cluster:db`.`table`.`column`
private final String name;
private final DataType dataType;
private final boolean nullable;
private final List<String> qualifier;
protected final ExprId exprId;
protected final String name;
protected final DataType dataType;
protected final boolean nullable;
protected final List<String> qualifier;
private final Column column;
public SlotReference(String name, DataType dataType) {
@ -182,6 +180,7 @@ public class SlotReference extends Slot {
return this;
}
@Override
public SlotReference withNullable(boolean newNullable) {
if (this.nullable == newNullable) {
return this;
@ -190,16 +189,21 @@ public class SlotReference extends Slot {
}
@Override
public SlotReference withQualifier(List<String> qualifiers) {
return new SlotReference(exprId, name, dataType, nullable, qualifiers, column);
public SlotReference withQualifier(List<String> qualifier) {
return new SlotReference(exprId, name, dataType, nullable, qualifier, column);
}
@Override
public SlotReference withName(String name) {
return new SlotReference(exprId, name, dataType, nullable, qualifier, column);
}
@Override
public SlotReference withExprId(ExprId exprId) {
return new SlotReference(exprId, name, dataType, nullable, qualifier, column);
}
public boolean isVisible() {
return column == null || column.isVisible();
}
@Override
public Slot withName(String name) {
return new SlotReference(exprId, name, dataType, nullable, qualifier, column);
}
}

View File

@ -19,6 +19,7 @@ package org.apache.doris.nereids.trees.expressions;
import org.apache.doris.nereids.StatementContext;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.qe.ConnectContext;
import com.google.common.annotations.VisibleForTesting;
@ -47,6 +48,14 @@ public class StatementScopeIdGenerator {
return ConnectContext.get().getStatementContext().getNextObjectId();
}
public static RelationId newRelationId() {
// this branch is for test only
if (ConnectContext.get() == null || ConnectContext.get().getStatementContext() == null) {
return statementContext.getNextRelationId();
}
return ConnectContext.get().getStatementContext().getNextRelationId();
}
public static CTEId newCTEId() {
// this branch is for test only
if (ConnectContext.get() == null || ConnectContext.get().getStatementContext() == null) {

View File

@ -62,6 +62,10 @@ public abstract class SubqueryExpr extends Expression {
return typeCoercionExpr.orElseGet(() -> queryPlan.getOutput().get(0));
}
public Expression getSubqueryOutput(LogicalPlan queryPlan) {
return typeCoercionExpr.orElseGet(() -> queryPlan.getOutput().get(0));
}
@Override
public DataType getDataType() throws UnboundException {
throw new UnboundException("getDataType");

View File

@ -119,4 +119,30 @@ public class VirtualSlotReference extends SlotReference implements SlotNotFromCh
public boolean nullable() {
return false;
}
public VirtualSlotReference withNullable(boolean newNullable) {
if (this.nullable == newNullable) {
return this;
}
return new VirtualSlotReference(exprId, name, dataType, newNullable, qualifier,
originExpression, computeLongValueMethod);
}
@Override
public VirtualSlotReference withQualifier(List<String> qualifier) {
return new VirtualSlotReference(exprId, name, dataType, nullable, qualifier,
originExpression, computeLongValueMethod);
}
@Override
public VirtualSlotReference withName(String name) {
return new VirtualSlotReference(exprId, name, dataType, nullable, qualifier,
originExpression, computeLongValueMethod);
}
@Override
public VirtualSlotReference withExprId(ExprId exprId) {
return new VirtualSlotReference(exprId, name, dataType, nullable, qualifier,
originExpression, computeLongValueMethod);
}
}

View File

@ -41,7 +41,8 @@ import java.util.stream.Collectors;
/** TableValuedFunction */
public abstract class TableValuedFunction extends BoundFunction implements UnaryExpression, CustomSignature {
protected final Supplier<TableValuedFunctionIf> catalogFunctionCache = Suppliers.memoize(() -> toCatalogFunction());
protected final Supplier<TableValuedFunctionIf> catalogFunctionCache = Suppliers.memoize(this::toCatalogFunction);
protected final Supplier<FunctionGenTable> tableCache = Suppliers.memoize(() -> {
try {
return catalogFunctionCache.get().getTable();

View File

@ -63,6 +63,6 @@ public class ObjectId extends Id<ObjectId> {
@Override
public String toString() {
return "RelationId#" + id;
return "ObjectId#" + id;
}
}

View File

@ -75,7 +75,7 @@ public interface Plan extends TreeNode<Plan> {
/**
* Get extra plans.
*/
default List<Plan> extraPlans() {
default List<? extends Plan> extraPlans() {
return ImmutableList.of();
}

View File

@ -0,0 +1,68 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.plans;
import org.apache.doris.common.Id;
import org.apache.doris.common.IdGenerator;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import java.util.Objects;
/**
* relation id
*/
public class RelationId extends Id<RelationId> {
public RelationId(int id) {
super(id);
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
RelationId relationId = (RelationId) o;
return id == relationId.id;
}
/**
* Should only be called by {@link StatementScopeIdGenerator}.
*/
public static IdGenerator<RelationId> createGenerator() {
return new IdGenerator<RelationId>() {
@Override
public RelationId getNextId() {
return new RelationId(nextId++);
}
};
}
@Override
public int hashCode() {
return Objects.hash(id);
}
@Override
public String toString() {
return "RelationId#" + id;
}
}
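
The `Id`/`IdGenerator` pattern used by `RelationId` above is the same one the other plan-level ids follow: each statement owns one generator, so ids are unique and monotonically increasing within that statement only. A minimal self-contained version of the pattern (hypothetical names, not Doris code):

```java
// Hypothetical standalone version of the Id/IdGenerator pattern.
public class RelationIdSketch {
    static final class RelationId {
        final int id;
        RelationId(int id) {
            this.id = id;
        }
        @Override
        public String toString() {
            return "RelationId#" + id;
        }
    }

    static final class RelationIdGenerator {
        private int nextId = 0;
        RelationId getNextId() {
            return new RelationId(nextId++);
        }
    }

    public static void main(String[] args) {
        RelationIdGenerator generator = new RelationIdGenerator(); // one per statement context
        System.out.println(generator.getNextId()); // RelationId#0
        System.out.println(generator.getNextId()); // RelationId#1
    }
}
```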

View File

@ -18,12 +18,13 @@
package org.apache.doris.nereids.trees.plans.algebra;
import org.apache.doris.catalog.Database;
import org.apache.doris.catalog.Table;
import org.apache.doris.catalog.TableIf;
import org.apache.doris.nereids.exceptions.AnalysisException;
/** CatalogRelation */
public interface CatalogRelation extends Relation {
Table getTable();
TableIf getTable();
Database getDatabase() throws AnalysisException;
}

View File

@ -22,7 +22,8 @@ import org.apache.doris.catalog.OlapTable;
import java.util.List;
/** OlapScan */
public interface OlapScan extends Scan {
public interface OlapScan {
OlapTable getTable();
long getSelectedIndexId();
@ -39,10 +40,10 @@ public interface OlapScan extends Scan {
}
OlapTable olapTable = getTable();
Integer selectTabletNumInPartitions = getSelectedPartitionIds().stream()
.map(partitionId -> olapTable.getPartition(partitionId))
int selectTabletNumInPartitions = getSelectedPartitionIds().stream()
.map(olapTable::getPartition)
.map(partition -> partition.getDistributionInfo().getBucketNum())
.reduce((b1, b2) -> b1 + b2)
.reduce(Integer::sum)
.orElse(0);
if (selectTabletNumInPartitions > 0) {
return selectTabletNumInPartitions;
@ -52,7 +53,7 @@ public interface OlapScan extends Scan {
return olapTable.getAllPartitions()
.stream()
.map(partition -> partition.getDistributionInfo().getBucketNum())
.reduce((b1, b2) -> b1 + b2)
.reduce(Integer::sum)
.orElse(0);
}
}

View File

@ -18,6 +18,7 @@
package org.apache.doris.nereids.trees.plans.algebra;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.RelationId;
import java.util.List;
@ -25,5 +26,8 @@ import java.util.List;
* Relation base interface
*/
public interface Relation {
RelationId getRelationId();
List<Slot> getOutput();
}

View File

@ -1,27 +0,0 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.plans.algebra;
import org.apache.doris.catalog.TableIf;
/**
* Common interface for logical/physical scan.
*/
public interface Scan extends Relation {
TableIf getTable();
}

View File

@ -35,7 +35,7 @@ import java.util.function.Supplier;
*/
public abstract class AbstractLogicalPlan extends AbstractPlan implements LogicalPlan, Explainable {
private Supplier<Boolean> hasUnboundExpressions = () -> super.hasUnboundExpression();
private final Supplier<Boolean> hasUnboundExpressions = super::hasUnboundExpression;
public AbstractLogicalPlan(PlanType type, Plan... children) {
super(type, children);

View File

@ -60,7 +60,7 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
private final List<NamedExpression> outputExpressions;
// When there are grouping sets/rollup/cube, LogicalAgg is generated by LogicalRepeat.
private final Optional<LogicalRepeat> sourceRepeat;
private final Optional<LogicalRepeat<?>> sourceRepeat;
private final boolean normalized;
private final boolean ordinalIsResolved;
@ -105,7 +105,7 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
public LogicalAggregate(
List<Expression> groupByExpressions,
List<NamedExpression> outputExpressions,
Optional<LogicalRepeat> sourceRepeat,
Optional<LogicalRepeat<?>> sourceRepeat,
CHILD_TYPE child) {
this(groupByExpressions, outputExpressions, false, sourceRepeat, child);
}
@ -114,7 +114,7 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
List<Expression> groupByExpressions,
List<NamedExpression> outputExpressions,
boolean normalized,
Optional<LogicalRepeat> sourceRepeat,
Optional<LogicalRepeat<?>> sourceRepeat,
CHILD_TYPE child) {
this(groupByExpressions, outputExpressions, normalized, false, false, false, sourceRepeat,
Optional.empty(), Optional.empty(), child);
@ -130,7 +130,7 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
boolean ordinalIsResolved,
boolean generated,
boolean hasPushed,
Optional<LogicalRepeat> sourceRepeat,
Optional<LogicalRepeat<?>> sourceRepeat,
Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties,
CHILD_TYPE child) {
@ -156,14 +156,10 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
return outputExpressions.stream().map(ExpressionTrait::toSql).collect(Collectors.joining(", "));
}
public Optional<LogicalRepeat> getSourceRepeat() {
public Optional<LogicalRepeat<?>> getSourceRepeat() {
return sourceRepeat;
}
public boolean hasRepeat() {
return sourceRepeat.isPresent();
}
public boolean isDistinct() {
return outputExpressions.equals(groupByExpressions);
}
@ -223,7 +219,7 @@ public class LogicalAggregate<CHILD_TYPE extends Plan>
if (o == null || getClass() != o.getClass()) {
return false;
}
LogicalAggregate that = (LogicalAggregate) o;
LogicalAggregate<?> that = (LogicalAggregate<?>) o;
return Objects.equals(groupByExpressions, that.groupByExpressions)
&& Objects.equals(outputExpressions, that.outputExpressions)
&& normalized == that.normalized

View File

@ -19,7 +19,6 @@ package org.apache.doris.nereids.trees.plans.logical;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
@ -29,11 +28,8 @@ import org.apache.doris.nereids.util.Utils;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
@ -44,35 +40,14 @@ public class LogicalCTE<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD_TYPE
private final List<LogicalSubQueryAlias<Plan>> aliasQueries;
private final Map<String, CTEId> cteNameToId;
private final boolean registered;
public LogicalCTE(List<LogicalSubQueryAlias<Plan>> aliasQueries, CHILD_TYPE child) {
this(aliasQueries, Optional.empty(), Optional.empty(), child, false, null);
}
public LogicalCTE(List<LogicalSubQueryAlias<Plan>> aliasQueries, CHILD_TYPE child, boolean registered,
Map<String, CTEId> cteNameToId) {
this(aliasQueries, Optional.empty(), Optional.empty(), child, registered,
cteNameToId);
this(aliasQueries, Optional.empty(), Optional.empty(), child);
}
public LogicalCTE(List<LogicalSubQueryAlias<Plan>> aliasQueries, Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, CHILD_TYPE child,
boolean registered, Map<String, CTEId> cteNameToId) {
Optional<LogicalProperties> logicalProperties, CHILD_TYPE child) {
super(PlanType.LOGICAL_CTE, groupExpression, logicalProperties, child);
this.aliasQueries = ImmutableList.copyOf(Objects.requireNonNull(aliasQueries, "aliasQueries can not be null"));
this.registered = registered;
this.cteNameToId = cteNameToId == null ? ImmutableMap.copyOf(initCTEId()) : cteNameToId;
}
private Map<String, CTEId> initCTEId() {
Map<String, CTEId> subQueryAliasToUniqueId = new HashMap<>();
for (LogicalSubQueryAlias<Plan> subQueryAlias : aliasQueries) {
subQueryAliasToUniqueId.put(subQueryAlias.getAlias(), subQueryAlias.getCteId());
}
return subQueryAliasToUniqueId;
}
public List<LogicalSubQueryAlias<Plan>> getAliasQueries() {
@ -80,8 +55,8 @@ public class LogicalCTE<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD_TYPE
}
@Override
public List<Plan> extraPlans() {
return (List) aliasQueries;
public List<? extends Plan> extraPlans() {
return aliasQueries;
}
/**
@ -126,7 +101,7 @@ public class LogicalCTE<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD_TYPE
@Override
public Plan withChildren(List<Plan> children) {
Preconditions.checkArgument(aliasQueries.size() > 0);
return new LogicalCTE<>(aliasQueries, children.get(0), registered, cteNameToId);
return new LogicalCTE<>(aliasQueries, children.get(0));
}
@Override
@ -141,30 +116,13 @@ public class LogicalCTE<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD_TYPE
@Override
public LogicalCTE<CHILD_TYPE> withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalCTE<>(aliasQueries, groupExpression, Optional.of(getLogicalProperties()), child(),
registered, cteNameToId);
return new LogicalCTE<>(aliasQueries, groupExpression, Optional.of(getLogicalProperties()), child());
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
Preconditions.checkArgument(aliasQueries.size() > 0);
return new LogicalCTE<>(aliasQueries, groupExpression, logicalProperties, children.get(0),
registered, cteNameToId);
}
public boolean isRegistered() {
return registered;
}
public CTEId findCTEId(String subQueryAlias) {
CTEId id = cteNameToId.get(subQueryAlias);
Preconditions.checkArgument(id != null, "Cannot find id for sub-query : %s",
subQueryAlias);
return id;
}
public Map<String, CTEId> getCteNameToId() {
return cteNameToId;
return new LogicalCTE<>(aliasQueries, groupExpression, logicalProperties, children.get(0));
}
}
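The snippet below is a hedged usage sketch, not part of the diff: `aliasQueries` and `childPlan` are placeholder variables standing in for whatever the parser produced. It only illustrates that the simplified `LogicalCTE` carries nothing but its alias queries and its child.

```java
// Minimal sketch: LogicalCTE now holds only the CTE definitions and the child plan.
LogicalCTE<Plan> cte = new LogicalCTE<>(aliasQueries, childPlan);
// The definitions are exposed as extra plans so generic tree walkers can still visit them.
List<? extends Plan> definitions = cte.extraPlans();
```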


@@ -40,21 +40,19 @@ public class LogicalCTEAnchor<LEFT_CHILD_TYPE extends Plan,
private final CTEId cteId;
public LogicalCTEAnchor(LEFT_CHILD_TYPE leftChild, RIGHT_CHILD_TYPE rightChild, CTEId cteId) {
this(Optional.empty(), Optional.empty(), leftChild, rightChild, cteId);
public LogicalCTEAnchor(CTEId cteId, LEFT_CHILD_TYPE leftChild, RIGHT_CHILD_TYPE rightChild) {
this(cteId, Optional.empty(), Optional.empty(), leftChild, rightChild);
}
public LogicalCTEAnchor(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties,
LEFT_CHILD_TYPE leftChild, RIGHT_CHILD_TYPE rightChild, CTEId cteId) {
public LogicalCTEAnchor(CTEId cteId, Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, LEFT_CHILD_TYPE leftChild, RIGHT_CHILD_TYPE rightChild) {
super(PlanType.LOGICAL_CTE_ANCHOR, groupExpression, logicalProperties, leftChild, rightChild);
this.cteId = cteId;
}
@Override
public Plan withChildren(List<Plan> children) {
return new LogicalCTEAnchor<>(groupExpression, Optional.of(getLogicalProperties()),
children.get(0), children.get(1), cteId);
return new LogicalCTEAnchor<>(cteId, children.get(0), children.get(1));
}
@Override
@@ -69,13 +67,13 @@ public class LogicalCTEAnchor<LEFT_CHILD_TYPE extends Plan,
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalCTEAnchor<>(groupExpression, Optional.of(getLogicalProperties()), left(), right(), cteId);
return new LogicalCTEAnchor<>(cteId, groupExpression, Optional.of(getLogicalProperties()), left(), right());
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalCTEAnchor<>(groupExpression, logicalProperties, children.get(0), children.get(1), cteId);
return new LogicalCTEAnchor<>(cteId, groupExpression, logicalProperties, children.get(0), children.get(1));
}
@Override

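A minimal sketch of the reordered `LogicalCTEAnchor` constructor, not part of the diff; `cteId`, `producer`, and `consumerSide` are assumed placeholders for values produced during analysis. It only shows the new argument order with the shared `cteId` first.

```java
// The same cteId links the producer (left child) with the subtree that consumes it (right child).
LogicalCTEAnchor<LogicalCTEProducer<Plan>, Plan> anchor =
        new LogicalCTEAnchor<>(cteId, producer, consumerSide);
// withChildren and the withGroupExpression variants keep this cteId and only swap children/properties.
```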

@@ -20,12 +20,11 @@ package org.apache.doris.nereids.trees.plans.logical;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.expressions.CTEId;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@@ -35,59 +34,67 @@ import com.google.common.collect.ImmutableList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
/**
* LogicalCTEConsumer
*/
public class LogicalCTEConsumer extends LogicalLeaf {
private final CTEId cteId;
private final Map<Slot, Slot> consumerToProducerOutputMap = new LinkedHashMap<>();
private final Map<Slot, Slot> producerToConsumerOutputMap = new LinkedHashMap<>();
private final int consumerId;
public class LogicalCTEConsumer extends LogicalRelation {
private final String name;
private final CTEId cteId;
private final Map<Slot, Slot> consumerToProducerOutputMap;
private final Map<Slot, Slot> producerToConsumerOutputMap;
/**
* Logical CTE consumer.
*/
public LogicalCTEConsumer(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, LogicalPlan childPlan, CTEId cteId, String name) {
super(PlanType.LOGICAL_CTE_RELATION, groupExpression, logicalProperties);
this.cteId = cteId;
this.name = name;
initProducerToConsumerOutputMap(childPlan);
for (Map.Entry<Slot, Slot> entry : producerToConsumerOutputMap.entrySet()) {
this.consumerToProducerOutputMap.put(entry.getValue(), entry.getKey());
}
this.consumerId = StatementScopeIdGenerator.newCTEId().asInt();
public LogicalCTEConsumer(RelationId relationId, CTEId cteId, String name,
Map<Slot, Slot> consumerToProducerOutputMap, Map<Slot, Slot> producerToConsumerOutputMap) {
super(relationId, PlanType.LOGICAL_CTE_RELATION, Optional.empty(), Optional.empty());
this.cteId = Objects.requireNonNull(cteId, "cteId should not null");
this.name = Objects.requireNonNull(name, "name should not null");
this.consumerToProducerOutputMap = Objects.requireNonNull(consumerToProducerOutputMap,
"consumerToProducerOutputMap should not null");
this.producerToConsumerOutputMap = Objects.requireNonNull(producerToConsumerOutputMap,
"producerToConsumerOutputMap should not null");
}
/**
* Logical CTE consumer.
*/
public LogicalCTEConsumer(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, CTEId cteId,
Map<Slot, Slot> consumerToProducerOutputMap,
Map<Slot, Slot> producerToConsumerOutputMap, int consumerId, String name) {
super(PlanType.LOGICAL_CTE_RELATION, groupExpression, logicalProperties);
this.cteId = cteId;
this.consumerToProducerOutputMap.putAll(consumerToProducerOutputMap);
this.producerToConsumerOutputMap.putAll(producerToConsumerOutputMap);
this.consumerId = consumerId;
this.name = name;
public LogicalCTEConsumer(RelationId relationId, CTEId cteId, String name, LogicalPlan producerPlan) {
super(relationId, PlanType.LOGICAL_CTE_RELATION, Optional.empty(), Optional.empty());
this.cteId = Objects.requireNonNull(cteId, "cteId should not null");
this.name = Objects.requireNonNull(name, "name should not null");
this.consumerToProducerOutputMap = new LinkedHashMap<>();
this.producerToConsumerOutputMap = new LinkedHashMap<>();
initOutputMaps(producerPlan);
}
private void initProducerToConsumerOutputMap(LogicalPlan childPlan) {
/**
* Logical CTE consumer.
*/
public LogicalCTEConsumer(RelationId relationId, CTEId cteId, String name,
Map<Slot, Slot> consumerToProducerOutputMap, Map<Slot, Slot> producerToConsumerOutputMap,
Optional<GroupExpression> groupExpression, Optional<LogicalProperties> logicalProperties) {
super(relationId, PlanType.LOGICAL_CTE_RELATION, groupExpression, logicalProperties);
this.cteId = Objects.requireNonNull(cteId, "cteId should not null");
this.name = Objects.requireNonNull(name, "name should not null");
this.consumerToProducerOutputMap = Objects.requireNonNull(consumerToProducerOutputMap,
"consumerToProducerOutputMap should not null");
this.producerToConsumerOutputMap = Objects.requireNonNull(producerToConsumerOutputMap,
"producerToConsumerOutputMap should not null");
}
private void initOutputMaps(LogicalPlan childPlan) {
List<Slot> producerOutput = childPlan.getOutput();
for (Slot producerOutputSlot : producerOutput) {
Slot consumerSlot = new SlotReference(producerOutputSlot.getName(),
producerOutputSlot.getDataType(), producerOutputSlot.nullable(), ImmutableList.of(name));
producerToConsumerOutputMap.put(producerOutputSlot, consumerSlot);
consumerToProducerOutputMap.put(consumerSlot, producerOutputSlot);
}
}
@@ -104,26 +111,25 @@ public class LogicalCTEConsumer extends LogicalLeaf {
return visitor.visitLogicalCTEConsumer(this, context);
}
@Override
public List<? extends Expression> getExpressions() {
return ImmutableList.of();
public Plan withTwoMaps(Map<Slot, Slot> consumerToProducerOutputMap, Map<Slot, Slot> producerToConsumerOutputMap) {
return new LogicalCTEConsumer(relationId, cteId, name,
consumerToProducerOutputMap, producerToConsumerOutputMap,
Optional.empty(), Optional.empty());
}
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalCTEConsumer(groupExpression, Optional.of(getLogicalProperties()), cteId,
consumerToProducerOutputMap,
producerToConsumerOutputMap,
consumerId, name);
return new LogicalCTEConsumer(relationId, cteId, name,
consumerToProducerOutputMap, producerToConsumerOutputMap,
groupExpression, Optional.of(getLogicalProperties()));
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalCTEConsumer(groupExpression, logicalProperties, cteId,
consumerToProducerOutputMap,
producerToConsumerOutputMap,
consumerId, name);
return new LogicalCTEConsumer(relationId, cteId, name,
consumerToProducerOutputMap, producerToConsumerOutputMap,
groupExpression, logicalProperties);
}
@Override
@@ -135,31 +141,11 @@ public class LogicalCTEConsumer extends LogicalLeaf {
return cteId;
}
@Override
public int hashCode() {
return consumerId;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
return this.consumerId == ((LogicalCTEConsumer) o).consumerId;
}
public int getConsumerId() {
return consumerId;
}
public String getName() {
return name;
}
public Slot findProducerSlot(Slot consumerSlot) {
public Slot getProducerSlot(Slot consumerSlot) {
Slot slot = consumerToProducerOutputMap.get(consumerSlot);
Preconditions.checkArgument(slot != null, String.format("Required producer"
+ "slot for :%s doesn't exist", consumerSlot));
@@ -170,6 +156,7 @@ public class LogicalCTEConsumer extends LogicalLeaf {
public String toString() {
return Utils.toSqlString("LogicalCteConsumer[" + id.asInt() + "]",
"cteId", cteId,
"consumerId", consumerId);
"relationId", relationId,
"name", name);
}
}
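To make the slot bookkeeping concrete, here is a hedged sketch (not part of the diff); `relationId`, `cteId`, and `producerPlan` are assumed placeholders, and the consumer's output is expected to be the consumer-side slots built in `initOutputMaps`.

```java
// Each producer output slot gets a consumer-side twin qualified by the CTE name ("cte1" here),
// and both directions of the mapping are recorded.
LogicalCTEConsumer consumer = new LogicalCTEConsumer(relationId, cteId, "cte1", producerPlan);
// Map a consumer slot back to the producer slot it was derived from.
Slot consumerSlot = consumer.getOutput().get(0);
Slot producerSlot = consumer.getProducerSlot(consumerSlot);
```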


@@ -25,6 +25,7 @@ import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
@@ -40,45 +41,30 @@ public class LogicalCTEProducer<CHILD_TYPE extends Plan> extends LogicalUnary<CH
private final CTEId cteId;
private final List<Slot> projects;
private final boolean rewritten;
public LogicalCTEProducer(CHILD_TYPE child, CTEId cteId) {
public LogicalCTEProducer(CTEId cteId, CHILD_TYPE child) {
super(PlanType.LOGICAL_CTE_PRODUCER, child);
this.cteId = cteId;
this.projects = ImmutableList.of();
this.rewritten = false;
}
public LogicalCTEProducer(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, CHILD_TYPE child, CTEId cteId,
List<Slot> projects, boolean rewritten) {
public LogicalCTEProducer(CTEId cteId, Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, CHILD_TYPE child) {
super(PlanType.LOGICAL_CTE_PRODUCER, groupExpression, logicalProperties, child);
this.cteId = cteId;
this.projects = ImmutableList.copyOf(Objects.requireNonNull(projects,
"projects should not null"));
this.rewritten = rewritten;
}
public CTEId getCteId() {
return cteId;
}
public List<Slot> getProjects() {
return projects;
}
@Override
public Plan withChildren(List<Plan> children) {
Preconditions.checkArgument(children.size() == 1);
return new LogicalCTEProducer<>(groupExpression, Optional.of(getLogicalProperties()), children.get(0),
cteId, projects, rewritten);
return new LogicalCTEProducer<>(cteId, children.get(0));
}
public Plan withChildrenAndProjects(List<Plan> children, List<Slot> projects, boolean rewritten) {
return new LogicalCTEProducer<>(groupExpression, Optional.of(getLogicalProperties()), children.get(0),
cteId, projects, rewritten);
@Override
public List<Expression> getExpressions() {
return ImmutableList.of();
}
@Override
@@ -86,41 +72,26 @@ public class LogicalCTEProducer<CHILD_TYPE extends Plan> extends LogicalUnary<CH
return visitor.visitLogicalCTEProducer(this, context);
}
@Override
public List<? extends Expression> getExpressions() {
return child().getExpressions();
}
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalCTEProducer<>(groupExpression, Optional.of(getLogicalProperties()), child(), cteId,
projects, rewritten);
return new LogicalCTEProducer<>(cteId, groupExpression, Optional.of(getLogicalProperties()), child());
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalCTEProducer<>(groupExpression, logicalProperties, children.get(0), cteId,
projects, rewritten);
return new LogicalCTEProducer<>(cteId, groupExpression, logicalProperties, children.get(0));
}
@Override
public List<Slot> computeOutput() {
return child().computeOutput();
return child().getOutput();
}
@Override
public String toString() {
return String.format("LOGICAL_CTE_PRODUCER#%d", cteId.asInt());
}
public boolean isRewritten() {
return rewritten;
}
@Override
public int hashCode() {
return Objects.hash(cteId, projects, rewritten);
return Utils.toSqlString("LogicalCteProducer[" + id.asInt() + "]",
"cteId", cteId);
}
@Override
@@ -131,13 +102,15 @@ public class LogicalCTEProducer<CHILD_TYPE extends Plan> extends LogicalUnary<CH
if (o == null || getClass() != o.getClass()) {
return false;
}
LogicalCTEProducer p = (LogicalCTEProducer) o;
if (cteId != p.cteId) {
if (!super.equals(o)) {
return false;
}
if (rewritten != p.rewritten) {
return false;
}
return projects.equals(p.projects);
LogicalCTEProducer<?> that = (LogicalCTEProducer<?>) o;
return Objects.equals(cteId, that.cteId);
}
@Override
public int hashCode() {
return Objects.hash(super.hashCode(), cteId);
}
}
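A short hedged sketch of the fixed output computation (not part of the diff); `cteId` and `rewrittenQuery` are placeholders. The point is that the producer now simply exposes its child's output, which is what the removed `projects` field tried to emulate.

```java
LogicalCTEProducer<Plan> producer = new LogicalCTEProducer<>(cteId, rewrittenQuery);
// computeOutput delegates to the child, so getOutput reflects the producer's query directly.
List<Slot> producerOutput = producer.getOutput();
```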


@@ -0,0 +1,98 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
package org.apache.doris.nereids.trees.plans.logical;
import org.apache.doris.catalog.Database;
import org.apache.doris.catalog.Env;
import org.apache.doris.catalog.TableIf;
import org.apache.doris.nereids.exceptions.AnalysisException;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.expressions.SlotReference;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.algebra.CatalogRelation;
import org.apache.doris.nereids.util.Utils;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
/**
* abstract class catalog relation for logical relation
*/
public abstract class LogicalCatalogRelation extends LogicalRelation implements CatalogRelation {
protected final TableIf table;
protected final ImmutableList<String> qualifier;
public LogicalCatalogRelation(RelationId relationId, PlanType type, TableIf table, List<String> qualifier) {
super(relationId, type);
this.table = Objects.requireNonNull(table, "table can not be null");
this.qualifier = ImmutableList.copyOf(Objects.requireNonNull(qualifier, "qualifier can not be null"));
}
public LogicalCatalogRelation(RelationId relationId, PlanType type, TableIf table, List<String> qualifier,
Optional<GroupExpression> groupExpression, Optional<LogicalProperties> logicalProperties) {
super(relationId, type, groupExpression, logicalProperties);
this.table = Objects.requireNonNull(table, "table can not be null");
this.qualifier = ImmutableList.copyOf(Objects.requireNonNull(qualifier, "qualifier can not be null"));
}
@Override
public TableIf getTable() {
return table;
}
@Override
public Database getDatabase() throws AnalysisException {
Preconditions.checkArgument(!qualifier.isEmpty());
return Env.getCurrentInternalCatalog().getDbOrException(qualifier.get(0),
s -> new AnalysisException("Database [" + qualifier.get(0) + "] does not exist."));
}
@Override
public List<Slot> computeOutput() {
return table.getBaseSchema()
.stream()
.map(col -> SlotReference.fromColumn(col, qualified()))
.collect(ImmutableList.toImmutableList());
}
public List<String> getQualifier() {
return qualifier;
}
/**
* Full qualified name parts, i.e., concat qualifier and name into a list.
*/
public List<String> qualified() {
return Utils.qualifiedNameParts(qualifier, table.getName());
}
/**
* Full qualified table name, concat qualifier and name with `.` as separator.
*/
public String qualifiedName() {
return Utils.qualifiedName(qualifier, table.getName());
}
}
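As a hedged illustration of the new base class (not part of the diff; `relationId`, `esTable`, and the `"db1"` qualifier are assumptions), any table-backed scan such as `LogicalEsScan` now inherits the table accessor and the name helpers from `LogicalCatalogRelation`.

```java
LogicalEsScan scan = new LogicalEsScan(relationId, esTable, ImmutableList.of("db1"));
TableIf table = scan.getTable();        // the catalog table behind the relation
List<String> parts = scan.qualified();  // presumably ["db1", "<table name>"]
String fullName = scan.qualifiedName(); // presumably "db1.<table name>"
```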


@@ -19,11 +19,11 @@ package org.apache.doris.nereids.trees.plans.logical;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.expressions.NamedExpression;
import org.apache.doris.nereids.trees.expressions.Slot;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.algebra.EmptyRelation;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@@ -39,17 +39,17 @@ import java.util.Optional;
* e.g.
* select * from tbl limit 0
*/
public class LogicalEmptyRelation extends LogicalLeaf implements EmptyRelation, OutputPrunable {
public class LogicalEmptyRelation extends LogicalRelation implements EmptyRelation, OutputPrunable {
private final List<NamedExpression> projects;
public LogicalEmptyRelation(List<? extends NamedExpression> projects) {
this(projects, Optional.empty(), Optional.empty());
public LogicalEmptyRelation(RelationId relationId, List<? extends NamedExpression> projects) {
this(relationId, projects, Optional.empty(), Optional.empty());
}
public LogicalEmptyRelation(List<? extends NamedExpression> projects, Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties) {
super(PlanType.LOGICAL_ONE_ROW_RELATION, groupExpression, logicalProperties);
public LogicalEmptyRelation(RelationId relationId, List<? extends NamedExpression> projects,
Optional<GroupExpression> groupExpression, Optional<LogicalProperties> logicalProperties) {
super(relationId, PlanType.LOGICAL_ONE_ROW_RELATION, groupExpression, logicalProperties);
this.projects = ImmutableList.copyOf(Objects.requireNonNull(projects, "projects can not be null"));
}
@@ -63,24 +63,20 @@ public class LogicalEmptyRelation extends LogicalLeaf implements EmptyRelation,
return projects;
}
@Override
public List<? extends Expression> getExpressions() {
return ImmutableList.of();
}
public LogicalEmptyRelation withProjects(List<? extends NamedExpression> projects) {
return new LogicalEmptyRelation(projects, Optional.empty(), Optional.empty());
return new LogicalEmptyRelation(relationId, projects, Optional.empty(), Optional.empty());
}
@Override
public Plan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalEmptyRelation(projects, groupExpression, Optional.of(logicalPropertiesSupplier.get()));
return new LogicalEmptyRelation(relationId, projects,
groupExpression, Optional.of(logicalPropertiesSupplier.get()));
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalEmptyRelation(projects, groupExpression, logicalProperties);
return new LogicalEmptyRelation(relationId, projects, groupExpression, logicalProperties);
}
@Override
@@ -97,26 +93,6 @@ public class LogicalEmptyRelation extends LogicalLeaf implements EmptyRelation,
);
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
if (!super.equals(o)) {
return false;
}
LogicalEmptyRelation that = (LogicalEmptyRelation) o;
return Objects.equals(projects, that.projects);
}
@Override
public int hashCode() {
return Objects.hash(projects);
}
@Override
public List<NamedExpression> getOutputs() {
return projects;


@@ -20,9 +20,9 @@ package org.apache.doris.nereids.trees.plans.logical;
import org.apache.doris.catalog.external.ExternalTable;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@@ -34,19 +34,18 @@ import java.util.Optional;
/**
* Logical scan for external es catalog.
*/
public class LogicalEsScan extends LogicalRelation {
public class LogicalEsScan extends LogicalCatalogRelation {
/**
* Constructor for LogicalEsScan.
*/
public LogicalEsScan(ObjectId id, ExternalTable table, List<String> qualifier,
public LogicalEsScan(RelationId id, ExternalTable table, List<String> qualifier,
Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties) {
super(id, PlanType.LOGICAL_ES_SCAN, table, qualifier,
groupExpression, logicalProperties);
super(id, PlanType.LOGICAL_ES_SCAN, table, qualifier, groupExpression, logicalProperties);
}
public LogicalEsScan(ObjectId id, ExternalTable table, List<String> qualifier) {
public LogicalEsScan(RelationId id, ExternalTable table, List<String> qualifier) {
this(id, table, qualifier, Optional.empty(), Optional.empty());
}
@@ -66,14 +65,14 @@ public class LogicalEsScan extends LogicalRelation {
@Override
public LogicalEsScan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalEsScan(id, (ExternalTable) table, qualifier, groupExpression,
return new LogicalEsScan(relationId, (ExternalTable) table, qualifier, groupExpression,
Optional.of(getLogicalProperties()));
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalEsScan(id, (ExternalTable) table, qualifier, groupExpression, logicalProperties);
return new LogicalEsScan(relationId, (ExternalTable) table, qualifier, groupExpression, logicalProperties);
}
@Override


@@ -21,9 +21,9 @@ import org.apache.doris.catalog.external.ExternalTable;
import org.apache.doris.nereids.memo.GroupExpression;
import org.apache.doris.nereids.properties.LogicalProperties;
import org.apache.doris.nereids.trees.expressions.Expression;
import org.apache.doris.nereids.trees.plans.ObjectId;
import org.apache.doris.nereids.trees.plans.Plan;
import org.apache.doris.nereids.trees.plans.PlanType;
import org.apache.doris.nereids.trees.plans.RelationId;
import org.apache.doris.nereids.trees.plans.visitor.PlanVisitor;
import org.apache.doris.nereids.util.Utils;
@@ -31,67 +31,63 @@ import com.google.common.base.Preconditions;
import com.google.common.collect.Sets;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
/**
* Logical file scan for external catalog.
*/
public class LogicalFileScan extends LogicalRelation {
public class LogicalFileScan extends LogicalCatalogRelation {
// TODO remove this conjuncts.
private final Set<Expression> conjuncts;
/**
* Constructor for LogicalFileScan.
*/
public LogicalFileScan(ObjectId id, ExternalTable table, List<String> qualifier,
Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties,
Set<Expression> conjuncts) {
public LogicalFileScan(RelationId id, ExternalTable table, List<String> qualifier,
Optional<GroupExpression> groupExpression, Optional<LogicalProperties> logicalProperties,
Set<Expression> conjuncts) {
super(id, PlanType.LOGICAL_FILE_SCAN, table, qualifier,
groupExpression, logicalProperties);
this.conjuncts = conjuncts;
}
public LogicalFileScan(ObjectId id, ExternalTable table, List<String> qualifier) {
public LogicalFileScan(RelationId id, ExternalTable table, List<String> qualifier) {
this(id, table, qualifier, Optional.empty(), Optional.empty(), Sets.newHashSet());
}
@Override
public ExternalTable getTable() {
Preconditions.checkArgument(table instanceof ExternalTable);
Preconditions.checkArgument(table instanceof ExternalTable,
"LogicalFileScan's table must be ExternalTable, but table is " + table.getClass().getSimpleName());
return (ExternalTable) table;
}
@Override
public String toString() {
return Utils.toSqlString("LogicalFileScan",
"qualified", qualifiedName(),
"output", getOutput()
"qualified", qualifiedName(),
"output", getOutput()
);
}
@Override
public boolean equals(Object o) {
return super.equals(o) && Objects.equals(conjuncts, ((LogicalFileScan) o).conjuncts);
}
@Override
public LogicalFileScan withGroupExpression(Optional<GroupExpression> groupExpression) {
return new LogicalFileScan(id, (ExternalTable) table, qualifier, groupExpression,
return new LogicalFileScan(relationId, (ExternalTable) table, qualifier, groupExpression,
Optional.of(getLogicalProperties()), conjuncts);
}
@Override
public Plan withGroupExprLogicalPropChildren(Optional<GroupExpression> groupExpression,
Optional<LogicalProperties> logicalProperties, List<Plan> children) {
return new LogicalFileScan(id, (ExternalTable) table, qualifier, groupExpression, logicalProperties, conjuncts);
return new LogicalFileScan(relationId, (ExternalTable) table, qualifier,
groupExpression, logicalProperties, conjuncts);
}
public LogicalFileScan withConjuncts(Set<Expression> conjuncts) {
return new LogicalFileScan(id, (ExternalTable) table, qualifier, groupExpression,
Optional.of(getLogicalProperties()), conjuncts);
return new LogicalFileScan(relationId, (ExternalTable) table, qualifier, groupExpression,
Optional.of(getLogicalProperties()), conjuncts);
}
@Override


@@ -67,7 +67,7 @@ public class LogicalFilter<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD_T
}
@Override
public List<Plan> extraPlans() {
public List<? extends Plan> extraPlans() {
return conjuncts.stream().map(Expression::children).flatMap(Collection::stream).flatMap(m -> {
if (m instanceof SubqueryExpr) {
return Stream.of(new LogicalSubQueryAlias<>(m.toSql(), ((SubqueryExpr) m).getQueryPlan()));


@@ -30,6 +30,7 @@ import org.apache.doris.nereids.util.Utils;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;
import java.util.List;
import java.util.Objects;
@@ -79,8 +80,16 @@ public class LogicalGenerate<CHILD_TYPE extends Plan> extends LogicalUnary<CHILD
return generators;
}
/**
* update generators
*/
public LogicalGenerate<Plan> withGenerators(List<Function> generators) {
return new LogicalGenerate<>(generators, generatorOutput,
Preconditions.checkArgument(generators.size() == generatorOutput.size());
List<Slot> newGeneratorOutput = Lists.newArrayList();
for (int i = 0; i < generators.size(); i++) {
newGeneratorOutput.add(generatorOutput.get(i).withNullable(generators.get(i).nullable()));
}
return new LogicalGenerate<>(generators, newGeneratorOutput,
Optional.empty(), Optional.of(getLogicalProperties()), child());
}
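A brief hedged sketch of the `withGenerators` fix (not part of the diff); `generate` and `newGenerators` are placeholders. Replacing the generator functions now also refreshes the nullability of every output slot.

```java
// newGenerators must have the same size as the existing generator output (checked by a precondition);
// each output slot keeps its name and type but takes the nullability of its new generator.
LogicalGenerate<Plan> updated = generate.withGenerators(newGenerators);
```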
