I deploy BE nodes in containers, with all nodes sharing the same distributed disk. In this setup the current data-migration logic produces errors. For example, suppose the distributed disk is 10 TB and other services have already consumed 9 TB of it; the current logic then assumes that all 9 TB of used space belongs to the BE nodes.
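The mis-attribution above can be sketched as follows. This is a minimal illustration, not Doris source code; the per-BE accounting value (`be_data`) is a hypothetical figure used only to show the gap between filesystem-level usage and a single node's real footprint:

```python
TB = 1 << 40

# A 10 TB distributed disk shared by several services: 9 TB is used,
# but (for illustration) only 0.5 TB of it belongs to this BE node.
fs_total = 10 * TB
fs_used = 9 * TB
be_data = TB // 2  # hypothetical per-BE accounting, absent in the current logic

def usage_ratio_naive(total, used):
    # Current behavior: "used" comes from filesystem statistics
    # (total - available), so every used byte on the shared disk is
    # attributed to this BE node.
    return used / total

def usage_ratio_per_be(total, be_bytes):
    # What migration decisions would need instead: only the bytes
    # actually written by this BE node.
    return be_bytes / total

print(usage_ratio_naive(fs_total, fs_used))   # 0.9 -> node looks almost full
print(usage_ratio_per_be(fs_total, be_data))  # 0.05 -> the node's real footprint
```

With the naive ratio, the migration/balancing logic sees a nearly full node even though the BE itself stores very little data, which is the error described above.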
# fe-common

This module stores common classes shared by the other modules.

# spark-dpp

This module is the Spark DPP program, used by the Spark Load feature.

Depends: fe-common

# fe-core

This module is the main process module of FE.

Depends: fe-common, spark-dpp
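The dependency edges above could be expressed in a Maven POM roughly as follows. This is a hedged sketch only: the `groupId`, `version`, and exact artifact coordinates are assumptions for illustration, not taken from the actual build files.

```xml
<!-- Sketch of fe-core's dependencies; coordinates are hypothetical. -->
<dependencies>
    <!-- fe-core depends on the shared classes in fe-common -->
    <dependency>
        <groupId>org.example.fe</groupId>
        <artifactId>fe-common</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <!-- fe-core also depends on the Spark DPP program for Spark Load -->
    <dependency>
        <groupId>org.example.fe</groupId>
        <artifactId>spark-dpp</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>
```

Note that `spark-dpp` itself depends only on `fe-common`, so the graph is acyclic: fe-common at the bottom, spark-dpp in the middle, fe-core on top.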