Refactor the interface of create_file_reader
file_size and mtime are merged into FileDescription and are no longer part of FileReaderOptions.
Now the file handle cache can get the correct modification time of a file from FileDescription.
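A minimal sketch of the refactored shape, with field and parameter names assumed for illustration (the actual definitions in the codebase may differ):

```cpp
#include <cstdint>
#include <memory>
#include <string>

// Illustrative shapes only; the real definitions in the codebase may differ.
struct FileDescription {
    std::string path;
    int64_t file_size = -1;  // -1 means unknown and must be probed by the reader
    int64_t mtime = 0;       // modification time, part of the file handle cache key
};

struct FileReaderOptions {
    bool use_file_cache = false;  // reader-level options only; no file_size/mtime anymore
};

class FileReader;

// The factory now receives the file metadata through FileDescription, so the
// file handle cache can validate cached handles against the correct mtime.
std::shared_ptr<FileReader> create_file_reader(const FileDescription& fd,
                                               const FileReaderOptions& opts);
```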
Add HdfsIO for hdfs file reader
Picked from: [Enhancement](multi-catalog) Add hdfs read statistics profile. #21442
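A rough sketch of what such a reader-side statistics hook could look like; all names and counters here are illustrative assumptions, not the actual implementation:

```cpp
#include <atomic>
#include <cstdint>

// Counters an HDFS reader could accumulate and report in the query profile.
struct HdfsReadStatistics {
    std::atomic<int64_t> total_bytes_read{0};
    std::atomic<int64_t> total_local_bytes_read{0};
    std::atomic<int64_t> total_short_circuit_bytes_read{0};
};

class HdfsIO {
public:
    explicit HdfsIO(HdfsReadStatistics* stats) : _stats(stats) {}

    // Wrap the underlying HDFS read and feed the counters used by the profile.
    int64_t read(char* buf, int64_t nbytes) {
        int64_t n = do_hdfs_read(buf, nbytes);
        if (n > 0 && _stats != nullptr) {
            _stats->total_bytes_read += n;
        }
        return n;
    }

private:
    // Stand-in for the real libhdfs call; not shown here.
    int64_t do_hdfs_read(char* /*buf*/, int64_t /*nbytes*/) { return 0; }

    HdfsReadStatistics* _stats;
};
```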
Currently, there are unnecessary includes in the codebase. We can use the include-what-you-use tool to clean them up, and applying a strict include-what-you-use policy brings many benefits (faster builds and clearer dependencies between files).
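For illustration, a hypothetical translation unit before and after a strict include-what-you-use pass (the file and symbols below are made up):

```cpp
// hypothetical_file.cpp: before, <string> and <memory> were only reachable through
// a transitive include; after a strict include-what-you-use pass, every symbol used
// here is backed by a direct include and unused headers are dropped.
#include <memory>   // for std::unique_ptr, std::make_unique
#include <string>   // for std::string

// #include <map>   // removed by IWYU: nothing in this file uses std::map

std::unique_ptr<std::string> make_name() {
    return std::make_unique<std::string>("doris");
}
```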
Follow #17586.
This PR mainly changes:
- Remove env/
- Remove FileUtils/FilesystemUtils; some methods are moved to LocalFileSystem
- Remove olap/file_cache
- Add an S3 client cache for the S3 file system; in my test, the time to open an S3 file is reduced significantly (see the sketch below)
- Fix a cold/hot data separation bug for the S3 file system.
This is the last PR of #17764.
After this, all IO operations should be in io/fs.
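A minimal sketch of the idea behind the S3 client cache mentioned above; the class, key layout, and factory callback are assumptions, not the actual implementation. The point is simply to reuse an already constructed client for the same endpoint/credentials instead of rebuilding it on every file open:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Placeholder for the real S3 client type (e.g. the AWS SDK client).
class S3Client;

class S3ClientCache {
public:
    // The cache key would typically combine endpoint, region, bucket and access key.
    std::shared_ptr<S3Client> get_or_create(
            const std::string& key,
            const std::function<std::shared_ptr<S3Client>()>& factory) {
        std::lock_guard<std::mutex> lock(_mutex);
        auto it = _clients.find(key);
        if (it != _clients.end()) {
            return it->second;  // cache hit: skip the expensive client construction
        }
        auto client = factory();
        _clients.emplace(key, client);
        return client;
    }

private:
    std::mutex _mutex;
    std::map<std::string, std::shared_ptr<S3Client>> _clients;
};
```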
Besides the tests in #17586, I also tested some cases related to fs IO:
- clone
- concurrent queries on local/s3/hdfs
- load error log creation and cleanup
- disk metrics
See #17764 for details
I have tested:
- Unit test for local/s3/hdfs/broker file system: be/test/io/fs/file_system_test.cpp
- Outfile to local/s3/hdfs/broker.
- Load from local/s3/hdfs/broker.
- Query file on local/s3/hdfs/broker file system, with table value function and catalog.
- Backup/Restore with local/s3/hdfs/broker file system
Not tested:
- cold & hot data separation case.
Since FileSystem inherits from std::enable_shared_from_this, it is dangerous to create a raw pointer to a FileSystem.
To avoid this, the constructor of each XxxFileSystem is made private and the static method create(...) is used to obtain a new FileSystem object.
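A small sketch of the pattern, with illustrative class names:

```cpp
#include <memory>
#include <string>
#include <utility>

class FileSystem : public std::enable_shared_from_this<FileSystem> {
public:
    virtual ~FileSystem() = default;
};

class LocalFileSystem final : public FileSystem {
public:
    // The only way to obtain an instance is through create(), which guarantees the
    // object is owned by a shared_ptr, so shared_from_this() is always safe to call.
    static std::shared_ptr<LocalFileSystem> create(std::string root) {
        // std::make_shared cannot reach the private constructor, hence the explicit new.
        return std::shared_ptr<LocalFileSystem>(new LocalFileSystem(std::move(root)));
    }

private:
    explicit LocalFileSystem(std::string root) : _root(std::move(root)) {}

    std::string _root;
};
```

Because create() always hands out a shared_ptr, shared_from_this() can never be called on an object that is not shared-owned.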
# Proposed changes
This PR fixed lots of issues when building from source on macOS with Apple M1 chip.
## ATTENTION
The job of supporting macOS with the Apple M1 chip is large, and there are still many unresolved runtime issues:
1. Some errors with memory tracker occur when BE (RELEASE) starts.
2. Some UT cases fail.
...
Temporarily, the following changes are made on macOS so that BE can start successfully:
1. Disable memory tracker.
2. Use tcmalloc instead of jemalloc.
This PR kicks off the job. Anyone interested in it can continue to fix these runtime issues.
## Use case
```shell
./build.sh -j 8 --be --clean
cd output/be/bin
ulimit -n 60000
./start_be.sh --daemon
```
## Something else
It takes around _**10+**_ minutes to build BE (with prebuilt third parties) on macOS with the M1 chip. The development experience on macOS will improve greatly once the adaptation job is finished.
* remove alpha_rowset_meta
* remove alpha rowset related codes in compaction
* remove alpha rowset related codes in RowsetMeta
* fix BE UT, because some UTs use alpha RowsetMeta
* [Schema Change] support fast add/drop column (#49)
* [feature](schema-change) support fast schema change. coauthor: yixiutt
* [schema change] Using columns desc from fe to read data. coauthor: Lchangliang
* [feature](schema change) schema change optimize for add/drop columns.
1. add uniqueId field for class Column.
2. schema change for add/drop columns directly updates the schema meta
Co-authored-by: yixiutt <yixiu@selectdb.com>
Co-authored-by: SWJTU-ZhangLei <1091517373@qq.com>
[Feature](schema change) fix write and add regression test (#69)
Co-authored-by: yixiutt <yixiu@selectdb.com>
[schema change] BE supports delete using the newest schema
add delete regression test
fix regression case (#107)
tmp
[feature](schema change) light schema change excludes rollup and agg/uniq/dup key types.
[feature](schema change) FE OlapTable maxUniqueId is written to disk.
[feature](schema change) add RPC interface for schema change add column.
[feature](schema change) add columnsDesc to TPushReq for light schema change.
resolve the deadlock when schema change (#124)
fix columns from FE missing the bitmap_index flag (#134)
add update/delete case
construct MATERIALIZED schema from the origin schema when inserting
fix non-vectorized compaction coredump
use segment cache
choose the newest schema by schema version during compaction (#182)
[bugfix](schema change) fix light schema change problem.
[feature](schema change) light schema change add alter job. (#1)
fix be ut
[bug](schema change) dropping a key column on a unique table should not use light schema change
[feature](schema change) add schema change regression-test.
fix regression test
[bugfix](schema change) fix multi alter clauses for light schema change. (#2)
[bugfix](schema change) fix multi clauses calculate column unique id (#3)
modify PushTask process (#217)
[Bugfix](schema change) fix jobId replay causing a BDBJE exception.
[bug](schema change) fix duplicated max column unique id. (#232)
[optimize](schema change) modify the pendingMaxColUniqueId generation rule.
fix compaction error
* fix be ut
* fix snapshot load core
fix unique_id error (#278)
[refact](fe) remove redundant code for light schema change. (#4)
format fe core
format be core
fix be ut
modify fe meta version
fix rebase error
flush schema into rowset_meta in old table
[refactor](schema change) refact fe light schema change. (#5)
delete the change of schema hash and support getting the max version schema
* modify for review
* fix be ut
* fix schema change test
This PR supports rowset-level data upload on the BE side, so that a tablet can contain both cold data and hot data,
and there is no need to prohibit loading new data into cooled tablets.
Each rowset is bound to a `FileSystem`, so that the storage layer can read and write rowsets without
being aware of the underlying filesystem (see the sketch below).
The abstracted `RemoteFileSystem` can try local caching strategies with different granularities,
instead of caching segment files as before.
To avoid conflicts with the code in be/src/io, we temporarily put the file system related code in the be/src/io/fs directory.
In the future, `FileReader`s and `FileWriter`s should be unified.
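A rough sketch of the binding, with interfaces heavily simplified and names assumed:

```cpp
#include <memory>
#include <string>
#include <utility>

// Heavily simplified interfaces for illustration only.
class FileReader;

class FileSystem {
public:
    virtual ~FileSystem() = default;
    virtual std::shared_ptr<FileReader> open_file(const std::string& path) = 0;
};

// A remote file system can layer local caching behind the same interface,
// at whatever granularity it chooses (whole segment, block, ...).
class RemoteFileSystem : public FileSystem {};

class Rowset {
public:
    explicit Rowset(std::shared_ptr<FileSystem> fs) : _fs(std::move(fs)) {}

    // The storage layer just asks the bound FileSystem; it does not care whether
    // the bytes live on a local disk or on S3/HDFS.
    std::shared_ptr<FileReader> open_segment(const std::string& segment_path) {
        return _fs->open_file(segment_path);
    }

private:
    std::shared_ptr<FileSystem> _fs;
};
```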
Currently, there are two status codes in BE: one is common/Status.h,
and the other, called OLAPStatus, is in olap/olap_define.h.
OLAPStatus is just an enum type; it is very simple and cannot carry much information,
so I will unify these codes into common/Status.
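For illustration, the direction of the change; OLAPStatus and the Status class below are simplified stand-ins, not the real definitions:

```cpp
#include <string>
#include <utility>

// Before: a bare enum that cannot carry an error message or any context.
enum OLAPStatus { OLAP_SUCCESS = 0, OLAP_ERR_FILE_NOT_FOUND = -1 };

// After: everything returns a Status carrying a code plus a message
// (simplified stand-in for common/Status).
class Status {
public:
    static Status OK() { return Status(); }
    static Status NotFound(std::string msg) { return Status(std::move(msg)); }
    bool ok() const { return _msg.empty(); }
    const std::string& msg() const { return _msg; }

private:
    Status() = default;
    explicit Status(std::string msg) : _msg(std::move(msg)) {}
    std::string _msg;
};

Status open_tablet(const std::string& path) {
    // Old style: `return OLAP_ERR_FILE_NOT_FOUND;` gives the caller no context.
    return Status::NotFound("missing tablet meta file: " + path);
}
```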
1. We forgot to check disk capacity when writing data.
2. TODO: the user-specified disk capacity is not used now. We need to find a way to use it.
3. Avoid printing too many compaction logs when there is no suitable version for compaction.
As described in #8120, a large number of rowset metas remain in rocksdb, which may be generated by:
1. drop tablet
The drop tablet task itself just sets the state of the tablet meta to `SHUTDOWN`
and moves the tablet to the `_shutdown_tablets` vector; the background thread then
periodically cleans up the tablets in `_shutdown_tablets` (that's why, even if we execute
`drop table xx force`, the tablet may be delayed by 10 minutes to 1 hour before it goes into the trash directory).
The regular cleanup thread in the background saves the complete tablet meta as a `.hdr` file
when deleting the tablet, and then moves it to the trash directory along with the data files.
But this process does not handle the rowset metas (before the checkpoint of the tablet meta,
each rowset meta is stored independently in rocksdb as a key-value pair), so residual rowset metas are left behind.
2. clone task
The clone task may migrate tablets back and forth between BEs, which may result in a situation
where the tablet id is the same on the BE but the tablet uid is different.
As a result, some rowset metas cannot find their corresponding tablet, and since there is no thread
to process these rowsets, they eventually become residual.
In this PR, I handle it in the regular cleanup thread with the method `_clean_unused_rowset_metas()` (see the sketch below).
I did not delete the rowset metas along with the "drop tablet" task, because "drop tablet" itself is not a synchronous operation;
it also relies on a background thread to clean up the tablet periodically.
So I put this operation in the background cleanup thread.
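A simplified sketch of what the cleanup could do; the data structures and names below are illustrative, not the real implementation. The idea is to scan all rowset metas persisted in rocksdb and collect the ones whose tablet no longer exists or whose tablet uid does not match:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative stand-ins for the real meta types.
struct RowsetMetaInfo {
    int64_t tablet_id;
    std::string tablet_uid;
    std::string rowset_meta_key;  // key of this rowset meta in rocksdb
};

struct TabletInfo {
    std::string tablet_uid;
};

// Called from the regular background cleanup thread.
std::vector<std::string> clean_unused_rowset_metas(
        const std::vector<RowsetMetaInfo>& all_rowset_metas,
        const std::unordered_map<int64_t, TabletInfo>& alive_tablets) {
    std::vector<std::string> keys_to_remove;
    for (const auto& rs : all_rowset_metas) {
        auto it = alive_tablets.find(rs.tablet_id);
        // Residual cases: the tablet is gone (dropped), or the tablet id was reused
        // after a clone and the tablet uid no longer matches.
        if (it == alive_tablets.end() || it->second.tablet_uid != rs.tablet_uid) {
            keys_to_remove.push_back(rs.rowset_meta_key);
        }
    }
    return keys_to_remove;  // the caller deletes these keys from rocksdb
}
```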
1. Fix the problem of BE crash caused by destruction order. (close #8058)
2. Add a new BE config `compaction_task_num_per_fast_disk`.
This config specifies the max number of concurrent compaction tasks on a fast disk (typically SSD),
so that for high-speed disks we can execute more compaction tasks at the same time
and compact the data as soon as possible (a minimal sketch follows this list).
3. Avoid frequent selection of unqualified tablets to perform compaction.
4. Modify some log levels to reduce the log size of BE.
5. Modify some clone logic to handle errors correctly.
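As referenced in item 2, a minimal sketch of how the new config could be consulted; the surrounding code and the `compaction_task_num_per_disk` default are assumptions for illustration:

```cpp
#include <cstdint>

namespace config {
// Existing limit for ordinary disks and the new limit for fast disks (SSD);
// the default values here are made up.
inline int32_t compaction_task_num_per_disk = 2;
inline int32_t compaction_task_num_per_fast_disk = 4;
}  // namespace config

enum class DiskMedium { HDD, SSD };

// Pick the per-disk compaction concurrency cap based on the disk medium.
int32_t max_compaction_tasks_for(DiskMedium medium) {
    return medium == DiskMedium::SSD ? config::compaction_task_num_per_fast_disk
                                     : config::compaction_task_num_per_disk;
}
```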
First, we need to add a parameter to describe whether the data is local or remote.
Then, we need to support some basic functions for operating on remote storage.
1. replace all boost::shared_ptr with std::shared_ptr
2. replace all boost::scoped_ptr with std::unique_ptr
3. replace all boost::scoped_array with std::unique_ptr<T[]>
4. replace all boost::thread with std::thread
In version 0.13, we support a more efficient compaction logic.
This logic will maintain multiple version paths of the tablet.
This can avoid -230 errors and can also support incremental clone.
But the previous incremental clone used the incremental rowset metas recorded in `incr_rs_meta`.
At present, the incremental rowset metas recorded in `incr_rs_meta` duplicate the records
in `stale_rs_meta`, and the current clone logic is not adapted to the
new multi-version path, so in many cases incremental clone is not triggered.
This CL mainly modified:
1. Removed `incr_rs_meta` metadata
2. Modified the clone logic. When doing an incremental clone, it will try to read the rowsets in `stale_rs_meta` (see the sketch below).
3. Delete a lot of code that was previously used for version compatibility.
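A simplified sketch of the adjusted incremental-clone lookup; types and names are illustrative. When a requested version is not found among the visible rowsets, it is also looked up in `stale_rs_meta` before falling back to a full clone:

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <utility>

using Version = std::pair<int64_t, int64_t>;  // [start_version, end_version]

struct RowsetMeta {};

struct TabletMeta {
    std::map<Version, RowsetMeta> rs_metas;        // rowsets on the visible version path
    std::map<Version, RowsetMeta> stale_rs_metas;  // rowsets on stale paths, kept for a while
};

// Incremental clone: look at the visible rowsets first, then the stale ones.
// Only if the version exists in neither do we fall back to a full clone.
std::optional<RowsetMeta> find_version_for_clone(const TabletMeta& meta, const Version& v) {
    if (auto it = meta.rs_metas.find(v); it != meta.rs_metas.end()) {
        return it->second;
    }
    if (auto it = meta.stale_rs_metas.find(v); it != meta.stale_rs_metas.end()) {
        return it->second;
    }
    return std::nullopt;  // missing version: incremental clone is not possible
}
```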