When we use Spark Load from a Hive table, `loadDataFromHiveTable` reads the whole Hive table and only filters the data later in `process()`. If the Hive table has many partitions and a lot of historical data, the load costs too much time and too many resources. Instead, we can do the filter work in `loadDataFromHiveTable` when reading from the Hive table, so the filter is applied at scan time.

Co-authored-by: 杜安明 <anming.du@mihoyo.com>
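A minimal sketch of the idea (the helper name and class are hypothetical, not the actual Doris code): build the Hive read query with the load filter already in the `WHERE` clause, so the statement handed to `spark.sql(...)` lets Spark prune partitions at scan time instead of reading the full table and filtering afterwards in `process()`.

```java
public class HiveLoadSql {
    // Hypothetical helper: push the load's filter expression into the Hive
    // read query so Spark can prune partitions when scanning the table,
    // rather than filtering the whole table after it has been read.
    static String buildQuery(String hiveTable, String whereClause) {
        String sql = "SELECT * FROM " + hiveTable;
        if (whereClause != null && !whereClause.isEmpty()) {
            sql += " WHERE " + whereClause;
        }
        return sql;
    }
}
```

The resulting string would be passed to `spark.sql(...)`; when `whereClause` references the table's partition columns (e.g. `dt = '2023-01-01'`), Spark only scans the matching partitions.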