Modify some bad links in docs. (#9078)

Modify some bad links in docs.
This commit is contained in:
smallhibiscus
2022-04-18 13:29:22 +08:00
committed by GitHub
parent 9051ed7c7d
commit dffd8513c6
17 changed files with 96 additions and 30 deletions

View File

@@ -206,4 +206,4 @@ It is recommended to import the new and old clusters in parallel for a period of
## More Help
- For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) command manual. You can also type `HELP BACKUP` in the MySQL client command line for more help.
+ For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/BACKUP.html) command manual. You can also type `HELP BACKUP` in the MySQL client command line for more help.
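For orientation, a minimal sketch of what the linked manual covers, assuming a previously created repository `example_repo` and a table `example_tbl` (both placeholder names):

```sql
-- Back up one table into an existing repository (names are placeholders).
BACKUP SNAPSHOT example_db.snapshot_label1
TO example_repo
ON (example_tbl);
```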

View File

@@ -50,4 +50,4 @@ RECOVER PARTITION p1 FROM example_tbl;
## More Help
- For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) command manual. You can also type `HELP RECOVER` in the MySQL client command line for more help.
+ For more detailed syntax and best practices used by RECOVER, please refer to the [RECOVER](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RECOVER.html) command manual. You can also type `HELP RECOVER` in the MySQL client command line for more help.
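A quick sketch of the command the manual documents, following the `example_tbl` placeholder used in the hunk above:

```sql
-- Recover a dropped table, then a dropped partition, from the recycle bin.
RECOVER TABLE example_tbl;
RECOVER PARTITION p1 FROM example_tbl;
```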

View File

@@ -126,7 +126,7 @@ The restore operation needs to specify an existing backup in the remote warehous
1 row in set (0.01 sec)
```
- For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference-v2/Show-Statements/RESTORE.html).
+ For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html).
## Related Commands
@@ -180,4 +180,4 @@ The commands related to the backup and restore function are as follows. For the
## More Help
- For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) command manual. You can also type `HELP RESTORE` in the MySQL client command line for more help.
+ For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-reference-v2/Data-Definition-Statements/Backup-and-Restore/RESTORE.html) command manual. You can also type `HELP RESTORE` in the MySQL client command line for more help.
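A minimal RESTORE sketch, reusing the placeholder repository and snapshot names from the backup example; the timestamp value is illustrative and would normally be read from `SHOW SNAPSHOT`:

```sql
-- Restore a table from a snapshot in the repository (all names are placeholders).
RESTORE SNAPSHOT example_db.snapshot_label1
FROM example_repo
ON (example_tbl)
PROPERTIES ("backup_timestamp" = "2022-04-08-15-52-29");
```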

View File

@@ -29,7 +29,7 @@ under the License.
In version 0.14, Doris supports atomic replacement of two tables.
This operation only applies to OLAP tables.
- For partition level replacement operations, please refer to [Temporary Partition Document](./alter-table-temp-partition.md)
+ For partition level replacement operations, please refer to [Temporary Partition Document](../partition/table-temp-partition.html)
## Syntax
@@ -69,4 +69,4 @@ If `swap` is `false`, the operation is as follows:
1. Atomic Overwrite Operation
- In some cases, the user wants to rewrite the data of a table, but dropping it and re-importing leaves a window during which the data cannot be queried. In this case, the user can first create a new table with the same structure using the `CREATE TABLE LIKE` statement, import the new data into it, and then atomically replace the old table through the replacement operation. For partition level atomic overwrite operations, please refer to [Temporary partition document](./alter-table-temp-partition.md)
+ In some cases, the user wants to rewrite the data of a table, but dropping it and re-importing leaves a window during which the data cannot be queried. In this case, the user can first create a new table with the same structure using the `CREATE TABLE LIKE` statement, import the new data into it, and then atomically replace the old table through the replacement operation. For partition level atomic overwrite operations, please refer to [Temporary partition document](../partition/table-temp-partition.html)
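A minimal sketch of the overwrite workflow described above; `tbl` and `tbl_staging` are placeholder names:

```sql
-- 1. Create an empty table with the same structure.
CREATE TABLE tbl_staging LIKE tbl;
-- 2. Import the new data into tbl_staging (not shown).
-- 3. Atomically replace the old table; with 'swap' = 'false' the old data is dropped.
ALTER TABLE tbl REPLACE WITH TABLE tbl_staging PROPERTIES('swap' = 'false');
```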

View File

@@ -97,20 +97,20 @@ TransactionId: 10023
* JobId: A unique ID for each Schema Change job.
* TableName: The table name of the base table corresponding to Schema Change.
* CreateTime: Job creation time.
- * FinishedTime: The end time of the job. If it is not finished, "N / A" is displayed.
+ * FinishedTime: The end time of the job. If it is not finished, "N/A" is displayed.
* IndexName: The name of an Index involved in this modification.
* IndexId: The unique ID of the new Index.
* OriginIndexId: The unique ID of the old Index.
* SchemaVersion: Displayed in M:N format. M is the version of this Schema Change, and N is the corresponding hash value. With each Schema Change, the version is incremented.
* TransactionId: the watershed transaction ID of the conversion history data.
* State: The phase of the operation.
-     * PENDING: The job is waiting in the queue to be scheduled.
-     * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
-         * RUNNING: Historical data conversion.
-         * FINISHED: The operation was successful.
-             * CANCELLED: The job failed.
+ * PENDING: The job is waiting in the queue to be scheduled.
+ * WAITING_TXN: Wait for the import task before the watershed transaction ID to complete.
+ * RUNNING: Historical data conversion.
+ * FINISHED: The operation was successful.
+ * CANCELLED: The job failed.
* Msg: If the job fails, a failure message is displayed here.
- * Progress: operation progress. Progress is displayed only in the RUNNING state, in M / N form, where N is the total number of replicas involved in the Schema Change and M is the number of replicas whose historical data conversion has finished.
+ * Progress: operation progress. Progress is displayed only in the RUNNING state, in M/N form, where N is the total number of replicas involved in the Schema Change and M is the number of replicas whose historical data conversion has finished.
* Timeout: Job timeout, in seconds.
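The fields above are columns of the job report; a hedged sketch of how one might query it (the database name is a placeholder):

```sql
-- Show the most recent schema change job, including State, Progress and Timeout.
SHOW ALTER TABLE COLUMN FROM example_db ORDER BY CreateTime DESC LIMIT 1\G
```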
## Cancel Job

View File

@@ -129,4 +129,4 @@ Because the file meta-information and content are stored in FE memory. So by def
## More Help
- For more detailed syntax and best practices used by the file manager, see the [CREATE FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-FILE.html), [DROP FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Drop/DROP-FILE.html) and [SHOW FILE](../sql-manual/sql-reference-v2 /Show-Statements/SHOW-FILE.md) command manuals. You can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySQL client command line for more help.
+ For more detailed syntax and best practices used by the file manager, see the [CREATE FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-FILE.html), [DROP FILE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Drop/DROP-FILE.html) and [SHOW FILE](../sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.md) command manuals. You can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySQL client command line for more help.
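A minimal sketch tying the three commands together; the URL, catalog and database names are placeholders:

```sql
-- Upload a small certificate into FE memory, list it, then remove it.
CREATE FILE "ca.pem" PROPERTIES ("url" = "https://example.com/ca.pem", "catalog" = "kafka");
SHOW FILE FROM example_db;
DROP FILE "ca.pem" PROPERTIES ("catalog" = "kafka");
```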

View File

@@ -82,7 +82,7 @@ Hdfs load creates an import statement. The import method is basically the same a
3. Check import status
- Broker load is an asynchronous import method. The specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-LOAD.html#show- load) command.
+ Broker load is an asynchronous import method. The specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-LOAD.html#show-load) command.
```
mysql> show load order by createtime desc limit 1\G;

View File

@@ -38,7 +38,7 @@ This document describes how to create external tables accessed through the ODBC
## create external table
- For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE]((../../../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) syntax help manual.
+ For a detailed introduction to creating ODBC external tables, please refer to the [CREATE ODBC TABLE](../../../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-EXTERNAL-TABLE.html) syntax help manual.
Here is just an example of how to use it.
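As a hedged sketch of what such a table can look like; the host, credentials and driver name are placeholders and depend on the local ODBC driver installation:

```sql
CREATE EXTERNAL TABLE ext_mysql_tbl (
    k1 INT,
    k2 VARCHAR(32)
) ENGINE = ODBC
COMMENT "ODBC external table"
PROPERTIES (
    "host" = "127.0.0.1",
    "port" = "3306",
    "user" = "mysql_user",
    "password" = "mysql_passwd",
    "database" = "test_db",
    "table" = "test_tbl",
    "driver" = "MySQL ODBC 8.0 Unicode Driver",
    "odbc_type" = "mysql"
);
```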

View File

@@ -58,7 +58,7 @@ Accessing an SSL-authenticated Kafka cluster requires the user to provide a cert
CREATE FILE "client.pem" PROPERTIES("url" = "https://example_url/kafka-key/client.pem", "catalog" = "kafka");
````
- After the upload is complete, you can view the uploaded files through the [SHOW FILES]() command.
+ After the upload is complete, you can view the uploaded files through the [SHOW FILES](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-FILE.html) command.
### Create a routine import job
@@ -112,9 +112,9 @@ For specific commands to create routine import tasks, see [ROUTINE LOAD](../../.
### View import job status
- See [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD.html for specific commands and examples for checking the status of a **job** ) command documentation.
+ See the [SHOW ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD.html) command documentation for specific commands and examples for checking the status of a **job**.
- See [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW -ROUTINE-LOAD-TASK.html) command documentation.
+ See the [SHOW ROUTINE LOAD TASK](../../../sql-manual/sql-reference-v2/Show-Statements/SHOW-ROUTINE-LOAD-TASK.html) command documentation.
Only currently running tasks can be viewed; completed and not-yet-started tasks cannot be viewed.
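A minimal sketch of the two status queries; the job name is a placeholder:

```sql
SHOW ROUTINE LOAD FOR example_db.example_job\G
SHOW ROUTINE LOAD TASK WHERE JobName = "example_job";
```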
@@ -126,8 +126,8 @@ The user can modify some properties of the job that has been created. For detail
The user can stop, pause and resume the job through the three commands `STOP/PAUSE/RESUME`.
- For specific commands, please refer to the [STOP ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html), [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html) and [RESUME ROUTINE LOAD](../../ ../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) command documentation.
+ For specific commands, please refer to the [STOP ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/STOP-ROUTINE-LOAD.html), [PAUSE ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/PAUSE-ROUTINE-LOAD.html) and [RESUME ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/RESUME-ROUTINE-LOAD.html) command documentation.
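A minimal sketch of the three job-control commands; the job name is a placeholder:

```sql
PAUSE ROUTINE LOAD FOR example_job;
RESUME ROUTINE LOAD FOR example_job;
STOP ROUTINE LOAD FOR example_job;   -- a stopped job cannot be resumed
```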
## more help
- For more detailed syntax and best practices for ROUTINE LOAD, see [ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/CREATE-ROUTINE -LOAD.html) command manual.
+ For more detailed syntax and best practices for ROUTINE LOAD, see [ROUTINE LOAD](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/CREATE-ROUTINE-LOAD.html) command manual.

View File

@@ -434,4 +434,4 @@ Currently the Profile can only be viewed after the job has been successfully exe
## more help
- For more detailed syntax and best practices used by Broker Load, see the [Broker Load](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/BROKER- LOAD.html) command manual. You can also enter `HELP BROKER LOAD` in the MySQL client command line for more help information.
+ For more detailed syntax and best practices used by Broker Load, see the [Broker Load](../../../sql-manual/sql-reference-v2/Data-Manipulation-Statements/Load/BROKER-LOAD.html) command manual. You can also enter `HELP BROKER LOAD` in the MySQL client command line for more help information.
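A minimal Broker Load sketch pointing at what the manual covers; the path, broker name and credentials are placeholders:

```sql
LOAD LABEL example_db.label_20220418
(
    DATA INFILE("hdfs://host:8020/user/data/file.txt")
    INTO TABLE example_tbl
)
WITH BROKER "broker_name"
(
    "username" = "hdfs_user",
    "password" = "hdfs_passwd"
);
```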

View File

@@ -232,7 +232,7 @@ Accessing the SSL-certified Kafka cluster requires the user to provide a certifi
### Viewing the status of the load job
- Specific commands and examples for viewing the status of the ** job** can be viewed with the `HELP SHOW ROUTINE LOAD;` command.
+ Specific commands and examples for viewing the status of the **job** can be viewed with the `HELP SHOW ROUTINE LOAD;` command.
Specific commands and examples for viewing the **Task** status can be viewed with the `HELP SHOW ROUTINE LOAD TASK;` command.

View File

@@ -312,7 +312,7 @@ Timeout = 1000s, i.e. 10G / 10M/s
```
### Complete examples
- Data situation: The data to be imported is in the local disk path / home / store_sales on the client sending the import request; it is about 15G and is to be imported into the table store\_sales of the database bj_sales.
+ Data situation: The data to be imported is in the local disk path /home/store_sales on the client sending the import request; it is about 15G and is to be imported into the table store\_sales of the database bj_sales.
Cluster situation: The concurrency of Stream load is not affected by cluster size.
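As a rough check against the 10M/s throughput figure in the timeout formula above, a 15G file would need a timeout of at least 15 * 1024 MB / 10 MB/s, which is about 1536s.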

View File

@@ -129,7 +129,7 @@ Doris stores the data in an orderly manner, and builds a sparse index for Doris
The sparse index chooses a fixed-length prefix in the schema as the index content; Doris currently uses a 36-byte prefix as the index.
* When building tables, it is suggested that the common filter fields in queries be placed at the front of the schema: the more selective and the more frequently queried a field is, the earlier it should be placed.
- * One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [data model, ROLLUP and prefix index] (. / data-model-rollup. md).
+ * One particular feature of this is the varchar type field. The varchar type field can only be used as the last field of the sparse index. The index is truncated at varchar, so if varchar appears in front, the length of the index may be less than 36 bytes. Specifically, you can refer to [data model](./data-model.html), [ROLLUP and query](./hit-the-rollup.html).
* In addition to the sparse index, Doris also provides a bloomfilter index. A bloomfilter index has an obvious filtering effect on columns with many distinct values. If varchar cannot be placed in the sparse index, you can create a bloomfilter index instead, as in the sketch below.
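A hedged sketch of both suggestions; the schema and names are placeholders:

```sql
-- Frequently filtered columns come first so they fall inside the 36-byte prefix;
-- a bloomfilter index covers the varchar column instead.
CREATE TABLE example_tbl (
    user_id BIGINT,
    dt DATE,
    city VARCHAR(20)
)
DUPLICATE KEY(user_id, dt)
DISTRIBUTED BY HASH(user_id) BUCKETS 10
PROPERTIES ("bloom_filter_columns" = "city");
```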
### 1.5 Materialized View (rollup)

View File

@@ -332,7 +332,7 @@ It is also possible to use only one layer of partitioning. When using a layer pa
* Once the number of Buckets for a Partition is specified, it cannot be changed. Therefore, when determining the number of Buckets, you need to consider the expansion of the cluster in advance. For example, there are currently only 3 hosts, and each host has 1 disk. If the number of Buckets is only set to 3 or less, then even if you add more machines later, you can't increase the concurrency.
* Some examples: suppose there are 10 BEs, each with one disk. If the total size of a table is 500MB, you can consider 4-8 shards; 5GB: 8-16 shards; 50GB: 32 shards; 500GB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition; 5TB: partitioning is recommended, with each partition about 50GB in size and 16-32 shards per partition.
- > Note: The amount of data in the table can be viewed by the `[show data](../sql-manual/sql-reference-v2/Show-Statements/SHOW-DATA.html)` command. Divide the result by the number of replicas to get the amount of data in the table.
+ > Note: The amount of data in the table can be viewed by the [show data](../sql-manual/sql-reference-v2/Show-Statements/SHOW-DATA.html) command. Divide the result by the number of replicas to get the amount of data in the table.
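A hedged sizing sketch following the 50GB-per-partition guideline above; the schema, partition and bucket numbers are illustrative only:

```sql
CREATE TABLE sales (
    dt DATE,
    user_id BIGINT,
    amount DECIMAL(10, 2)
)
DUPLICATE KEY(dt, user_id)
PARTITION BY RANGE(dt) (
    PARTITION p202204 VALUES LESS THAN ("2022-05-01")
)
DISTRIBUTED BY HASH(user_id) BUCKETS 32;
```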
#### Compound Partitions vs Single Partitions
@@ -352,7 +352,7 @@ The user can also use a single partition without using composite partitions. The
### PROPERTIES
- For the parameters that can be set in the final PROPERTIES clause of the table creation statement, see [CREATE TABLE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-TABLE .html) for a detailed introduction.
+ For the parameters that can be set in the final PROPERTIES clause of the table creation statement, see [CREATE TABLE](../sql-manual/sql-reference-v2/Data-Definition-Statements/Create/CREATE-TABLE.html) for a detailed introduction.
### ENGINE

View File

@@ -44,7 +44,7 @@ Because Uniq is only a special case of the Aggregate model, we do not distinguis
Example 1: Get the total consumption per user
- Following [Data Model Aggregate Model](data-model.html#Aggregate Model) in the **Aggregate Model** section, the Base table structure is as follows:
+ Following [Data Model Aggregate Model](./data-model.html) in the **Aggregate Model** section, the Base table structure is as follows:
| ColumnName | Type | AggregationType | Comment |
|-------------------| ------------ | --------------- | -------------------------------------- |
@@ -128,7 +128,7 @@ Doris automatically hits the ROLLUP table.
#### ROLLUP in Duplicate Model
- Because the Duplicate model has no aggregate semantics, ROLLUP in this model loses the meaning of "rolling up"; it is used only to adjust the column order to hit the prefix index. In the next section, we will introduce prefix index in [data model prefix index](data-model.html#prefix index), and how to use ROLLUP to change the prefix index in order to achieve better query efficiency.
+ Because the Duplicate model has no aggregate semantics, ROLLUP in this model loses the meaning of "rolling up"; it is used only to adjust the column order to hit the prefix index. In the next section, we will introduce prefix index in [data model prefix index](./data-model.html), and how to use ROLLUP to change the prefix index in order to achieve better query efficiency.
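A minimal sketch of such a column-reordering rollup; the table and column names are placeholders:

```sql
-- The rollup's leading columns form a new prefix index for matching queries.
ALTER TABLE example_tbl ADD ROLLUP rollup_city(city, user_id, cost);
```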
## ROLLUP adjusts prefix index

View File

@@ -144,7 +144,7 @@ This article is applicable to multi-platform (Win|Mac|Linux) and multi-mode (bar
> 5. At the same time, if there is a data query, you should be able to see a log that keeps scrolling, with lines like `execute time is xxx`, indicating that the BE has started successfully and queries run normally.
> 6. You can also check whether the startup is successful through the following URL: http://be_host:be_http_port/api/health. If it returns {"status": "OK","msg": "To Be Added"}, the startup was successful; in other cases, there may be problems.
>
- > **Note: If you can't see the startup failure information in be.INFO, maybe you can see it in be.out. **
+ > **Note: If you can't see the startup failure information in be.INFO, maybe you can see it in be.out.**
Register the BE to the FE (using the MySQL client; you need to install it yourself)
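A minimal registration sketch, run in the MySQL client connected to the FE; the host and heartbeat port are placeholders:

```sql
ALTER SYSTEM ADD BACKEND "be_host:9050";
```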

View File

@@ -0,0 +1,66 @@
---
{
"title": "SHOW-FILE",
"language": "en"
}
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
## SHOW-FILE
### Name
SHOW FILE
### Description
This statement is used to display the files created in a database.
Syntax:
```sql
SHOW FILE [FROM database];
```
Notes:
````text
FileId: file ID, globally unique
DbName: the name of the database to which it belongs
Catalog: Custom Category
FileName: file name
FileSize: file size, in bytes
MD5: MD5 of the file
````
### Example
1. View the uploaded files in the database my_database
```sql
SHOW FILE FROM my_database;
```
### Keywords
SHOW, FILE
### Best Practice