Version v1.69.1

This commit is contained in:
Nick Craig-Wood 2025-02-14 15:17:21 +00:00
parent b63c42f39b
commit 4e77a4ff73
14 changed files with 1119 additions and 353 deletions

MANUAL.html generated

File diff suppressed because it is too large

MANUAL.md generated

@ -1,7 +1,78 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jan 12, 2025
% Feb 14, 2025
# NAME
rclone - manage files on cloud storage
# SYNOPSIS
```
Usage:
rclone [flags]
rclone [command]
Available commands:
about Get quota information from the remote.
authorize Remote authorization.
backend Run a backend-specific command.
bisync Perform bidirectional synchronization between two paths.
cat Concatenates any files and sends them to stdout.
check Checks the files in the source and destination match.
checksum Checks the files in the destination against a SUM file.
cleanup Clean up the remote if possible.
completion Output completion script for a given shell.
config Enter an interactive configuration session.
copy Copy files from source to dest, skipping identical files.
copyto Copy files from source to dest, skipping identical files.
copyurl Copy the contents of the URL supplied content to dest:path.
cryptcheck Cryptcheck checks the integrity of an encrypted remote.
cryptdecode Cryptdecode returns unencrypted file names.
dedupe Interactively find duplicate filenames and delete/rename them.
delete Remove the files in path.
deletefile Remove a single file from remote.
gendocs Output markdown docs for rclone to the directory supplied.
gitannex Speaks with git-annex over stdin/stdout.
hashsum Produces a hashsum file for all the objects in the path.
help Show help for rclone commands, flags and backends.
link Generate public link to file/folder.
listremotes List all the remotes in the config file and defined in environment variables.
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
md5sum Produces an md5sum file for all the objects in the path.
mkdir Make the path if it doesn't already exist.
mount Mount the remote as file system on a mountpoint.
move Move files from source to dest.
moveto Move file or directory from source to dest.
ncdu Explore a remote with a text based user interface.
nfsmount Mount the remote as file system on a mountpoint.
obscure Obscure password for use in the rclone config file.
purge Remove the path and all of its contents.
rc Run a command against a running rclone.
rcat Copies standard input to file on remote.
rcd Run rclone listening to remote control commands only.
rmdir Remove the empty directory at path.
rmdirs Remove empty directories under the path.
selfupdate Update the rclone binary.
serve Serve a remote over a protocol.
settier Changes storage class/tier of objects in remote.
sha1sum Produces an sha1sum file for all the objects in the path.
size Prints the total size and number of objects in remote:path.
sync Make source and dest identical, modifying destination only.
test Run a test command
touch Create new file or change file modification time.
tree List the contents of the remote in a tree like fashion.
version Show the version number.
Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
```
# Rclone syncs your files to cloud storage
<img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" >
@ -1690,6 +1761,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
implement this command directly, in which case `--checkers` will be ignored.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@ -3745,12 +3819,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
changing passwords programatically you can use the environment
changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
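As an illustration (a hypothetical helper, not part of the manual), a `--password-command` program can branch on this environment variable; here sketched in Python with placeholder secret lookups:

```python
import os

# Hypothetical secret lookups: a real helper would read a keyring or
# secrets manager rather than returning fixed placeholder strings.
def fetch_current_password() -> str:
    return "current-password"

def fetch_new_password() -> str:
    return "new-password"

def pick_password(env=os.environ) -> str:
    """Return what a --password-command program should print on stdout.

    rclone sets RCLONE_PASSWORD_CHANGE=1 when it is asking for the NEW
    password (e.g. during `rclone config encryption set`); otherwise it
    wants the current one.
    """
    if env.get("RCLONE_PASSWORD_CHANGE") == "1":
        return fetch_new_password()
    return fetch_current_password()

if __name__ == "__main__":
    print(pick_password())
```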
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
easier if you don't mind the unecrypted config file being on the disk
easier if you don't mind the unencrypted config file being on the disk
briefly.
@ -4290,7 +4364,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
## Troublshooting
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@ -10581,7 +10655,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@ -11408,7 +11482,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `disable_multipart_uploads = true` is to work around
Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@ -14444,6 +14518,11 @@ it to `false`. It is also possible to specify `--boolean=false` or
parsed as `--boolean` and the `false` is parsed as an extra command
line argument for rclone.
Options documented to take a `stringArray` parameter accept multiple
values. To pass more than one value, repeat the option; for example:
`--include value1 --include value2`.
### Time or duration options {#time-option}
TIME or DURATION options can be specified as a duration string or a
@ -16755,7 +16834,7 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
Options that can appear multiple times (type `stringArray`) are
treated slighly differently as environment variables can only be
treated slightly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example
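To illustrate the CSV splitting (sketched here in Python rather than the Go `encoding/csv` package the manual links to), a single environment variable expands into what would otherwise be a repeated flag:

```python
import csv
import io

def split_csv_env(value: str) -> list[str]:
    """Split a CSV-encoded env var value into individual items, the way
    one RCLONE_* variable stands in for a repeated stringArray flag."""
    if value == "":
        return []
    return next(csv.reader(io.StringIO(value)))

# RCLONE_EXCLUDE="*.jpg,*.png" would act like two --exclude flags:
print(split_csv_env("*.jpg,*.png"))   # ['*.jpg', '*.png']
# CSV quoting lets a single item itself contain a comma:
print(split_csv_env('"a,b",c'))       # ['a,b', 'c']
```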
@ -19937,7 +20016,7 @@ the `--vfs-cache-mode` is off, it will return an empty result.
],
}
The `expiry` time is the time until the file is elegible for being
The `expiry` time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone
only transfers `--transfers` files at once, only the lowest
`--transfers` expiry times will have `uploading` as `true`. So there
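The eligibility rule above (only the lowest `--transfers` expiry times get `uploading` as `true`, even when expiries have gone negative) can be sketched as:

```python
def uploading_flags(expiries: list[float], transfers: int) -> list[bool]:
    """Mark which cached files would report "uploading": true -- the
    `transfers` entries with the lowest expiry times (a negative expiry
    means the file is already overdue for upload)."""
    lowest = set(sorted(range(len(expiries)), key=expiries.__getitem__)[:transfers])
    return [i in lowest for i in range(len(expiries))]

# Three dirty files with --transfers 2: the overdue (-1.5 s) and the
# nearly-due (0.3 s) entries upload first.
print(uploading_flags([-1.5, 0.3, 9.9], 2))  # [True, True, False]
```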
@ -21018,7 +21097,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
```
@ -23066,7 +23145,7 @@ See the [bisync filters](#filtering) section and generic
[--filter-from](https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An [example filters file](#example-filters-file) contains filters for
non-allowed files for synching with Dropbox.
non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run
with `--resync`. This is a safety feature, which prevents existing files
@ -23243,7 +23322,7 @@ Using `--check-sync=false` will disable it and may significantly reduce the
sync run times for very large numbers of files.
The check may be run manually with `--check-sync=only`. It runs only the
integrity check and terminates without actually synching.
integrity check and terminates without actually syncing.
Note that currently, `--check-sync` **only checks listing snapshots and NOT the
actual files on the remotes.** Note also that the listing snapshots will not
@ -23720,7 +23799,7 @@ The `--include*`, `--exclude*`, and `--filter` flags are also supported.
### How to filter directories
Filtering portions of the directory tree is a critical feature for synching.
Filtering portions of the directory tree is a critical feature for syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync:
@ -23829,7 +23908,7 @@ quashed by adding `--quiet` to the bisync command line.
## Example exclude-style filters files for use with Dropbox {#exclude-filters}
- Dropbox disallows synching the listed temporary and configuration/data files.
- Dropbox disallows syncing the listed temporary and configuration/data files.
The `- <filename>` filters exclude these files wherever they may occur
in the sync tree. Consider adding similar exclusions for file types
you don't need to sync, such as core dump and software build files.
@ -24163,7 +24242,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
- `go test . -case basic -remote local -remote2 local`
runs the `test_basic` test case using only the local filesystem,
synching one local directory with another local directory.
syncing one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the `.../workdir/test.log` file,
which is finally compared to the golden copy.
@ -24394,6 +24473,9 @@ about _Unison_ and synchronization in general.
## Changelog
### `v1.69.1`
* Fixed an issue causing listings to not capture concurrent modifications under certain conditions
### `v1.68`
* Fixed an issue affecting backends that round modtimes to a lower precision.
@ -25680,7 +25762,7 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exsits, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
@ -28658,7 +28740,7 @@ location_constraint = au-nsw
### Rclone Serve S3 {#rclone}
Rclone can serve any remote over the S3 protocol. For details see the
[rclone serve s3](https://rclone.org/commands/rclone_serve_http/) documentation.
[rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) documentation.
For example, to serve `remote:path` over s3, run the server like this:
@ -28678,8 +28760,8 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `disable_multipart_uploads = true` is to work around
[a bug](https://rclone.org/commands/rclone_serve_http/#bugs) which will be fixed in due course.
Note that setting `use_multipart_uploads = false` is to work around
[a bug](https://rclone.org/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
### Scaleway
@ -29775,27 +29857,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Atlanta, GA (USA), us-southeast-1
1 / Amsterdam (Netherlands), nl-ams-1
\ (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
2 / Chicago, IL (USA), us-ord-1
3 / Chennai (India), in-maa-1
\ (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
3 / Frankfurt (Germany), eu-central-1
5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
4 / Milan (Italy), it-mil-1
6 / Jakarta (Indonesia), id-cgk-1
\ (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\ (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\ (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\ (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\ (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\ (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
5 / Newark, NJ (USA), us-east-1
13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
6 / Paris (France), fr-par-1
14 / Osaka (Japan), jp-osa-1
\ (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
7 / Seattle, WA (USA), us-sea-1
16 / São Paulo (Brazil), br-gru-1
\ (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
8 / Singapore ap-south-1
18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
9 / Stockholm (Sweden), se-sto-1
19 / Singapore 2, sg-sin-1
\ (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 3
endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@ -34415,7 +34519,7 @@ strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
approximately 2×10⁻³² of reusing a nonce.
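The quoted figure is consistent with a birthday-bound estimate, assuming the crypt format's 64 KiB chunks and 24-byte (192-bit) nonce:

```latex
n \approx \frac{10^{18}\ \text{bytes}}{2^{16}\ \text{bytes/chunk}}
  \approx 1.5 \times 10^{13}\ \text{chunks}
\qquad
P(\text{nonce reuse}) \approx \frac{n^2}{2 \cdot 2^{192}}
  \approx \frac{2.3 \times 10^{26}}{1.3 \times 10^{58}}
  \approx 2 \times 10^{-32}
```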
#### Chunk
@ -41561,7 +41665,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
[koofr]
[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@ -41578,6 +41682,20 @@ y/e/d> y
ADP is currently unsupported and needs to be disabled
On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
## Troubleshooting
### Missing PCS cookies from the request
This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off.
You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again.
You should then run `rclone reconnect remote:`.
Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly.
### Standard options
@ -46035,7 +46153,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany
- Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@ -46652,6 +46770,28 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
### Impersonate other users as Admin
Unlike Google Drive and impersonating any domain user via service accounts, OneDrive requires you to authenticate as an admin account, and manually set up a remote per user you wish to impersonate.
1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files"; you need to click to create the link. This creates a link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also changes the permissions so your admin user has access.
2. Then in PowerShell run the following commands:
```console
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes "Files.ReadWrite.All"
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId '{emailaddress}'
# This will give you output of the format:
# Name Id DriveType CreatedDateTime
# ---- -- --------- ---------------
# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
```
3. Then in rclone add a onedrive remote type, and use the `Type in driveID` with the DriveID you got in the previous step. One remote per user. It will then confirm the drive ID, and hopefully give you a message of `Found drive "root" of type "business"` and then include the URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents`
## Limitations
If you don't use rclone for 90 days the refresh token will
@ -56509,6 +56649,32 @@ Options:
# Changelog
## v1.69.1 - 2025-02-14
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
* Bug Fixes
* lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
* bisync: Fix listings missing concurrent modifications (nielash)
* serve s3: Fix list objects encoding-type (Nick Craig-Wood)
* fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
* doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
* build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
* VFS
* Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
* Fix race detected by race detector (Nick Craig-Wood)
* Close the change notify channel on Shutdown (izouxv)
* B2
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* Iclouddrive
* Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
* Onedrive
* Mark German (de) region as deprecated (Nick Craig-Wood)
* S3
* Added new storage class to magalu provider (Bruno Fernandes)
* Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
* Add latest Linode Object Storage endpoints (jbagwell-akamai)
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)

MANUAL.txt generated

@ -1,6 +1,75 @@
rclone(1) User Manual
Nick Craig-Wood
Jan 12, 2025
Feb 14, 2025
NAME
rclone - manage files on cloud storage
SYNOPSIS
Usage:
rclone [flags]
rclone [command]
Available commands:
about Get quota information from the remote.
authorize Remote authorization.
backend Run a backend-specific command.
bisync Perform bidirectional synchronization between two paths.
cat Concatenates any files and sends them to stdout.
check Checks the files in the source and destination match.
checksum Checks the files in the destination against a SUM file.
cleanup Clean up the remote if possible.
completion Output completion script for a given shell.
config Enter an interactive configuration session.
copy Copy files from source to dest, skipping identical files.
copyto Copy files from source to dest, skipping identical files.
copyurl Copy the contents of the URL supplied content to dest:path.
cryptcheck Cryptcheck checks the integrity of an encrypted remote.
cryptdecode Cryptdecode returns unencrypted file names.
dedupe Interactively find duplicate filenames and delete/rename them.
delete Remove the files in path.
deletefile Remove a single file from remote.
gendocs Output markdown docs for rclone to the directory supplied.
gitannex Speaks with git-annex over stdin/stdout.
hashsum Produces a hashsum file for all the objects in the path.
help Show help for rclone commands, flags and backends.
link Generate public link to file/folder.
listremotes List all the remotes in the config file and defined in environment variables.
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
md5sum Produces an md5sum file for all the objects in the path.
mkdir Make the path if it doesn't already exist.
mount Mount the remote as file system on a mountpoint.
move Move files from source to dest.
moveto Move file or directory from source to dest.
ncdu Explore a remote with a text based user interface.
nfsmount Mount the remote as file system on a mountpoint.
obscure Obscure password for use in the rclone config file.
purge Remove the path and all of its contents.
rc Run a command against a running rclone.
rcat Copies standard input to file on remote.
rcd Run rclone listening to remote control commands only.
rmdir Remove the empty directory at path.
rmdirs Remove empty directories under the path.
selfupdate Update the rclone binary.
serve Serve a remote over a protocol.
settier Changes storage class/tier of objects in remote.
sha1sum Produces an sha1sum file for all the objects in the path.
size Prints the total size and number of objects in remote:path.
sync Make source and dest identical, modifying destination only.
test Run a test command
touch Create new file or change file modification time.
tree List the contents of the remote in a tree like fashion.
version Show the version number.
Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Rclone syncs your files to cloud storage
@ -1600,6 +1669,10 @@ include/exclude filters - everything will be removed. Use the delete
command if you want to selectively delete files. To delete empty
directories only, use command rmdir or rmdirs.
The concurrency of this operation is controlled by the --checkers global
flag. However, some backends will implement this command directly, in
which case --checkers will be ignored.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
@ -3467,12 +3540,12 @@ re-encrypt the config.
When --password-command is called to change the password then the
environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if
changing passwords programatically you can use the environment variable
changing passwords programmatically you can use the environment variable
to distinguish which password you must supply.
Alternatively you can remove the password first (with
rclone config encryption remove), then set it again with this command
which may be easier if you don't mind the unecrypted config file being
which may be easier if you don't mind the unencrypted config file being
on the disk briefly.
rclone config encryption set [flags]
@ -3949,7 +4022,7 @@ there is one with the same name.
Setting --stdout or making the output file name - will cause the output
to be written to standard output.
Troublshooting
Troubleshooting
If you can't get rclone copyurl to work then here are some things you
can try:
@ -10102,7 +10175,7 @@ uses an on disk cache, but the cache entries are held as symlinks.
Rclone will use the handle of the underlying file as the NFS handle
which improves performance. This sort of cache can't be backed up and
restored as the underlying handles will change. This is Linux only. It
requres running rclone as root or with CAP_DAC_READ_SEARCH. You can run
requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run
rclone with this extra permission by doing this to the rclone binary
sudo setcap cap_dac_read_search+ep /path/to/rclone.
@ -10903,8 +10976,8 @@ which is defined like this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting disable_multipart_uploads = true is to work around a
bug which will be fixed in due course.
Note that setting use_multipart_uploads = false is to work around a bug
which will be fixed in due course.
Bugs
@ -13895,6 +13968,10 @@ also possible to specify --boolean=false or --boolean=true. Note that
--boolean false is not valid - this is parsed as --boolean and the false
is parsed as an extra command line argument for rclone.
Options documented to take a stringArray parameter accept multiple
values. To pass more than one value, repeat the option; for example:
--include value1 --include value2.
Time or duration options
TIME or DURATION options can be specified as a duration string or a time
@ -16177,7 +16254,7 @@ The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
Options that can appear multiple times (type stringArray) are treated
slighly differently as environment variables can only be defined once.
slightly differently as environment variables can only be defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded string. For example
@ -19420,7 +19497,7 @@ This is only useful if --vfs-cache-mode > off. If you call it when the
],
}
The expiry time is the time until the file is elegible for being
The expiry time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone only
transfers --transfers files at once, only the lowest --transfers expiry
times will have uploading as true. So there may be files with negative
@ -20569,7 +20646,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
Performance
@ -22531,7 +22608,7 @@ Also see the all files changed check.
By using rclone filter features you can exclude file types or directory
sub-trees from the sync. See the bisync filters section and generic
--filter-from documentation. An example filters file contains filters
for non-allowed files for synching with Dropbox.
for non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run with
--resync. This is a safety feature, which prevents existing files on the
@ -22704,7 +22781,7 @@ of a sync. Using --check-sync=false will disable it and may
significantly reduce the sync run times for very large numbers of files.
The check may be run manually with --check-sync=only. It runs only the
integrity check and terminates without actually synching.
integrity check and terminates without actually syncing.
Note that currently, --check-sync only checks listing snapshots and NOT
the actual files on the remotes. Note also that the listing snapshots
@ -23237,7 +23314,7 @@ supported.
How to filter directories
Filtering portions of the directory tree is a critical feature for
synching.
syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@ -23348,7 +23425,7 @@ This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
- Dropbox disallows synching the listed temporary and
- Dropbox disallows syncing the listed temporary and
configuration/data files. The `- <filename>` filters exclude these files
wherever they may occur in the sync tree. Consider adding similar
exclusions for file types you don't need to sync, such as core dump
@ -23668,7 +23745,7 @@ dash.
Running tests
- go test . -case basic -remote local -remote2 local runs the
test_basic test case using only the local filesystem, synching one
test_basic test case using only the local filesystem, syncing one
local directory with another local directory. Test script output is
to the console, while commands within scenario.txt have their output
sent to the .../workdir/test.log file, which is finally compared to
@ -23901,6 +23978,11 @@ Unison and synchronization in general.
Changelog
v1.69.1
- Fixed an issue causing listings to not capture concurrent
modifications under certain conditions
v1.68
- Fixed an issue affecting backends that round modtimes to a lower
@ -25192,7 +25274,7 @@ Notes on above:
that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
3. When using s3-no-check-bucket and the bucket already exsits, the
3. When using s3-no-check-bucket and the bucket already exists, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more
@ -28155,8 +28237,8 @@ this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting disable_multipart_uploads = true is to work around a
bug which will be fixed in due course.
Note that setting use_multipart_uploads = false is to work around a bug
which will be fixed in due course.
Scaleway
@ -29203,27 +29285,49 @@ This will guide you through an interactive setup process.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Atlanta, GA (USA), us-southeast-1
1 / Amsterdam (Netherlands), nl-ams-1
\ (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
2 / Chicago, IL (USA), us-ord-1
3 / Chennai (India), in-maa-1
\ (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
3 / Frankfurt (Germany), eu-central-1
5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
4 / Milan (Italy), it-mil-1
6 / Jakarta (Indonesia), id-cgk-1
\ (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\ (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\ (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\ (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\ (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\ (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
5 / Newark, NJ (USA), us-east-1
13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
6 / Paris (France), fr-par-1
14 / Osaka (Japan), jp-osa-1
\ (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
7 / Seattle, WA (USA), us-sea-1
16 / São Paulo (Brazil), br-gru-1
\ (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
8 / Singapore ap-south-1
18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
9 / Stockholm (Sweden), se-sto-1
19 / Singapore 2, sg-sin-1
\ (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 3
endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@ -33757,7 +33861,7 @@ The initial nonce is generated from the operating systems crypto strong
random number generator. The nonce is incremented for each chunk read
making sure each nonce is unique for each block written. The chance of a
nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸
bytes) you would have a probability of approximately 2×10⁻³² of re-using
bytes) you would have a probability of approximately 2×10⁻³² of reusing
a nonce.
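The quoted figure can be sanity-checked with the birthday bound. A back-of-the-envelope sketch (64 KiB chunks and a 24-byte secretbox nonce are rclone crypt's documented parameters):

```python
# Birthday-bound estimate of nonce reuse after writing an exabyte,
# assuming 64 KiB plaintext chunks and a 192-bit secretbox nonce.
CHUNK_SIZE = 64 * 1024   # bytes of plaintext per encrypted chunk
NONCE_BITS = 192         # NaCl secretbox nonce size (24 bytes)

chunks = 10**18 / CHUNK_SIZE             # ~1.5e13 nonces consumed
p = chunks**2 / (2 * 2**NONCE_BITS)      # n^2 / 2N birthday approximation
print(f"reuse probability ~ {p:.0e}")
```

which reproduces the approximately 2×10⁻³² quoted above.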
Chunk
@@ -40978,7 +41082,7 @@ This will guide you through an interactive setup process:
config_2fa> 2FACODE
Remote config
--------------------
[koofr]
[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -40994,6 +41098,27 @@ Advanced Data Protection
ADP is currently unsupported and needs to be disabled.
On iPhone, Settings > Apple Account > iCloud > 'Access iCloud Data on
the Web' must be ON, and 'Advanced Data Protection' OFF.
Troubleshooting
Missing PCS cookies from the request
This means you have Advanced Data Protection (ADP) turned on. This is
not supported at the moment. If you want to use rclone you will have to
turn it off. See above for how to turn it off.
You will need to clear the cookies and the trust_token fields in the
config. Or you can delete the remote config and start again.
You should then run rclone reconnect remote:.
Note that changing the ADP setting may not take effect immediately - you
may need to wait a few hours or a day before you can get rclone to work
- keep clearing the config entry and running rclone reconnect remote:
until rclone functions properly.
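The recovery loop described above can be sketched as follows (the `remote:` name follows the doc's convention; substitute your own remote):

```
# Locate the config file, then blank these fields in the remote's section:
#     cookies =
#     trust_token =
rclone config file

# Re-authenticate; repeat the clear + reconnect cycle until it works.
rclone reconnect remote:
```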
Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -45589,7 +45714,8 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany
- Microsoft Cloud Germany (deprecated - try global region
first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46248,6 +46374,38 @@ Here are the possible system metadata items for the onedrive backend.
See the metadata docs for more info.
Impersonate other users as Admin
Unlike Google Drive, where a service account can impersonate any domain
user, OneDrive requires you to authenticate as an admin account and
manually set up a remote for each user you wish to impersonate.
1. In Microsoft 365 Admin Center, open each user you need to
"impersonate" and go to the OneDrive section. There is a heading
called "Get access to files"; click it to create a link of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/
This also changes the permissions so that your admin user has access.
2. Then in PowerShell run the following commands:
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes "Files.ReadWrite.All"
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId '{emailaddress}'
# This will give you output of the format:
# Name Id DriveType CreatedDateTime
# ---- -- --------- ---------------
# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
3. Then in rclone add an onedrive remote and choose the Type in driveID
option, entering the DriveID you got in the previous step. Configure
one remote per user. It will then confirm the drive ID, and hopefully
give you a message of Found drive "root" of type "business" and then
include the URL of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
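After step 3, the per-user remote section in the config ends up looking something like this (remote name and IDs are illustrative, reusing the example DriveID from step 2):

```
[user1]
type = onedrive
drive_id = b!XYZ123
drive_type = business
token = {"access_token":"..."}
```

One such section is needed per impersonated user.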
Limitations
If you don't use rclone for 90 days the refresh token will expire. This
@@ -56157,6 +56315,37 @@ Options:
Changelog
v1.69.1 - 2025-02-14
See commits
- Bug Fixes
- lib/oauthutil: Fix redirect URL mismatch errors (Nick
Craig-Wood)
- bisync: Fix listings missing concurrent modifications (nielash)
- serve s3: Fix list objects encoding-type (Nick Craig-Wood)
- fs: Fix confusing "didn't find section in config file" error
(Nick Craig-Wood)
- doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt
Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
- build: Added parallel docker builds and caching for go build in
the container (Anagh Kumar Baranwal)
- VFS
- Fix the cache failing to upload symlinks when --links was
specified (Nick Craig-Wood)
- Fix race detected by race detector (Nick Craig-Wood)
- Close the change notify channel on Shutdown (izouxv)
- B2
- Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
- Iclouddrive
- Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
- Onedrive
- Mark German (de) region as deprecated (Nick Craig-Wood)
- S3
- Added new storage class to magalu provider (Bruno Fernandes)
- Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
- Add latest Linode Object Storage endpoints (jbagwell-akamai)
v1.69.0 - 2025-01-12
See commits
@@ -5,6 +5,32 @@ description: "Rclone Changelog"
# Changelog
## v1.69.1 - 2025-02-14
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
* Bug Fixes
* lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
* bisync: Fix listings missing concurrent modifications (nielash)
* serve s3: Fix list objects encoding-type (Nick Craig-Wood)
* fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
* doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
* build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
* VFS
* Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
* Fix race detected by race detector (Nick Craig-Wood)
* Close the change notify channel on Shutdown (izouxv)
* B2
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* Iclouddrive
* Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
* Onedrive
* Mark German (de) region as deprecated (Nick Craig-Wood)
* S3
* Added new storage class to magalu provider (Bruno Fernandes)
* Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
* Add latest Linode Object Storage endpoints (jbagwell-akamai)
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
@@ -965,7 +965,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -21,12 +21,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
changing passwords programatically you can use the environment
changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
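A `--password-command` helper can branch on this variable; a minimal sketch, with the secret sources as placeholder environment variables:

```
#!/bin/sh
# Hypothetical helper for --password-command. rclone sets
# RCLONE_PASSWORD_CHANGE=1 while collecting the password for a change,
# so print the new password then, and the current one otherwise.
if [ "$RCLONE_PASSWORD_CHANGE" = "1" ]; then
    printf '%s\n' "$NEW_CONFIG_PASS"
else
    printf '%s\n' "$CURRENT_CONFIG_PASS"
fi
```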
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
easier if you don't mind the unecrypted config file being on the disk
easier if you don't mind the unencrypted config file being on the disk
briefly.
@@ -28,7 +28,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
## Troublshooting
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -15,6 +15,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
implement this command directly, in which case `--checkers` will be ignored.
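For example (the path and flag value are illustrative):

```
# Preview a delete with more parallel checkers; on backends that
# implement delete directly, --checkers has no effect.
rclone delete remote:dir --checkers 16 --dry-run
```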
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -7,8 +7,6 @@ versionIntroduced: v1.65
---
# rclone serve nfs
*Not available in Windows.*
Serve the remote as an NFS mount
## Synopsis
@@ -55,7 +53,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -82,7 +82,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `disable_multipart_uploads = true` is to work around
Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -9,8 +9,6 @@ description: "Rclone Global Flags"
This describes the global flags available to every rclone command
split into groups.
See the [Options section](/docs/#options) for syntax and usage advice.
## Copy
@@ -118,7 +116,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
```
@@ -384,7 +384,7 @@ Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
However if you ue the gphotosdl proxy tnen you can download original,
However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
@@ -319,7 +319,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany
- Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
rclone.1 generated (294 changed lines)
@@ -1,8 +1,79 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Jan 12, 2025" "User Manual" ""
.TH "rclone" "1" "Feb 14, 2025" "User Manual" ""
.hy
.SH NAME
.PP
rclone - manage files on cloud storage
.SH SYNOPSIS
.IP
.nf
\f[C]
Usage:
rclone [flags]
rclone [command]
Available commands:
about Get quota information from the remote.
authorize Remote authorization.
backend Run a backend-specific command.
bisync Perform bidirectional synchronization between two paths.
cat Concatenates any files and sends them to stdout.
check Checks the files in the source and destination match.
checksum Checks the files in the destination against a SUM file.
cleanup Clean up the remote if possible.
completion Output completion script for a given shell.
config Enter an interactive configuration session.
copy Copy files from source to dest, skipping identical files.
copyto Copy files from source to dest, skipping identical files.
copyurl Copy the contents of the URL supplied content to dest:path.
cryptcheck Cryptcheck checks the integrity of an encrypted remote.
cryptdecode Cryptdecode returns unencrypted file names.
dedupe Interactively find duplicate filenames and delete/rename them.
delete Remove the files in path.
deletefile Remove a single file from remote.
gendocs Output markdown docs for rclone to the directory supplied.
gitannex Speaks with git-annex over stdin/stdout.
hashsum Produces a hashsum file for all the objects in the path.
help Show help for rclone commands, flags and backends.
link Generate public link to file/folder.
listremotes List all the remotes in the config file and defined in environment variables.
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
md5sum Produces an md5sum file for all the objects in the path.
mkdir Make the path if it doesn\[aq]t already exist.
mount Mount the remote as file system on a mountpoint.
move Move files from source to dest.
moveto Move file or directory from source to dest.
ncdu Explore a remote with a text based user interface.
nfsmount Mount the remote as file system on a mountpoint.
obscure Obscure password for use in the rclone config file.
purge Remove the path and all of its contents.
rc Run a command against a running rclone.
rcat Copies standard input to file on remote.
rcd Run rclone listening to remote control commands only.
rmdir Remove the empty directory at path.
rmdirs Remove empty directories under the path.
selfupdate Update the rclone binary.
serve Serve a remote over a protocol.
settier Changes storage class/tier of objects in remote.
sha1sum Produces an sha1sum file for all the objects in the path.
size Prints the total size and number of objects in remote:path.
sync Make source and dest identical, modifying destination only.
test Run a test command
touch Create new file or change file modification time.
tree List the contents of the remote in a tree like fashion.
version Show the version number.
Use \[dq]rclone [command] --help\[dq] for more information about a command.
Use \[dq]rclone help flags\[dq] to see the global flags.
Use \[dq]rclone help backends\[dq] for a list of supported services.
\f[R]
.fi
.SH Rclone syncs your files to cloud storage
.PP
.IP \[bu] 2
@@ -2238,6 +2309,11 @@ To delete empty directories only, use command
rmdir (https://rclone.org/commands/rclone_rmdir/) or
rmdirs (https://rclone.org/commands/rclone_rmdirs/).
.PP
The concurrency of this operation is controlled by the
\f[C]--checkers\f[R] global flag.
However, some backends will implement this command directly, in which
case \f[C]--checkers\f[R] will be ignored.
.PP
\f[B]Important\f[R]: Since this can cause data loss, test first with the
\f[C]--dry-run\f[R] or the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag.
.IP
@@ -4652,12 +4728,12 @@ password to re-encrypt the config.
.PP
When \f[C]--password-command\f[R] is called to change the password then
the environment variable \f[C]RCLONE_PASSWORD_CHANGE=1\f[R] will be set.
So if changing passwords programatically you can use the environment
So if changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
.PP
Alternatively you can remove the password first (with
\f[C]rclone config encryption remove\f[R]), then set it again with this
command which may be easier if you don\[aq]t mind the unecrypted config
command which may be easier if you don\[aq]t mind the unencrypted config
file being on the disk briefly.
.IP
.nf
@@ -5273,7 +5349,7 @@ destination if there is one with the same name.
.PP
Setting \f[C]--stdout\f[R] or making the output file name \f[C]-\f[R]
will cause the output to be written to standard output.
.SS Troublshooting
.SS Troubleshooting
.PP
If you can\[aq]t get \f[C]rclone copyurl\f[R] to work then here are some
things you can try:
@@ -12993,7 +13069,8 @@ which improves performance.
This sort of cache can\[aq]t be backed up and restored as the underlying
handles will change.
This is Linux only.
It requres running rclone as root or with \f[C]CAP_DAC_READ_SEARCH\f[R].
It requires running rclone as root or with
\f[C]CAP_DAC_READ_SEARCH\f[R].
You can run rclone with this extra permission by doing this to the
rclone binary
\f[C]sudo setcap cap_dac_read_search+ep /path/to/rclone\f[R].
@@ -13973,7 +14050,7 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
around a bug which will be fixed in due course.
.SS Bugs
.PP
@@ -17806,6 +17883,11 @@ It is also possible to specify \f[C]--boolean=false\f[R] or
Note that \f[C]--boolean false\f[R] is not valid - this is parsed as
\f[C]--boolean\f[R] and the \f[C]false\f[R] is parsed as an extra
command line argument for rclone.
.PP
Options documented to take a \f[C]stringArray\f[R] parameter accept
multiple values.
To pass more than one value, repeat the option; for example:
\f[C]--include value1 --include value2\f[R].
.SS Time or duration options
.PP
TIME or DURATION options can be specified as a duration string or a time
@@ -20455,8 +20537,8 @@ The options set by environment variables can be seen with the
\f[C]rclone version -vv\f[R].
.PP
Options that can appear multiple times (type \f[C]stringArray\f[R]) are
treated slighly differently as environment variables can only be defined
once.
treated slightly differently as environment variables can only be
defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded (https://godoc.org/encoding/csv)
string.
@@ -24731,7 +24813,7 @@ return an empty result.
\f[R]
.fi
.PP
The \f[C]expiry\f[R] time is the time until the file is elegible for
The \f[C]expiry\f[R] time is the time until the file is eligible for
being uploaded in floating point seconds.
This may go negative.
As rclone only transfers \f[C]--transfers\f[R] files at once, only the
@@ -28442,7 +28524,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.0\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.1\[dq])
\f[R]
.fi
.SS Performance
@@ -30761,7 +30843,7 @@ See the bisync filters section and generic
--filter-from (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An example filters file contains filters for non-allowed files for
synching with Dropbox.
syncing with Dropbox.
.PP
If you make changes to your filters file then bisync requires a run with
\f[C]--resync\f[R].
@@ -30987,7 +31069,7 @@ reduce the sync run times for very large numbers of files.
.PP
The check may be run manually with \f[C]--check-sync=only\f[R].
It runs only the integrity check and terminates without actually
synching.
syncing.
.PP
Note that currently, \f[C]--check-sync\f[R] \f[B]only checks listing
snapshots and NOT the actual files on the remotes.\f[R] Note also that
@@ -31701,7 +31783,7 @@ flags are also supported.
.SS How to filter directories
.PP
Filtering portions of the directory tree is a critical feature for
synching.
syncing.
.PP
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -31859,7 +31941,7 @@ This noise can be quashed by adding \f[C]--quiet\f[R] to the bisync
command line.
.SS Example exclude-style filters files for use with Dropbox
.IP \[bu] 2
Dropbox disallows synching the listed temporary and configuration/data
Dropbox disallows syncing the listed temporary and configuration/data
files.
The \[ga]- \[ga] filters exclude these files where ever they may occur
in the sync tree.
@@ -32246,7 +32328,7 @@ single \f[C]-\f[R] or double dash.
.SS Running tests
.IP \[bu] 2
\f[C]go test . -case basic -remote local -remote2 local\f[R] runs the
\f[C]test_basic\f[R] test case using only the local filesystem, synching
\f[C]test_basic\f[R] test case using only the local filesystem, syncing
one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the \f[C].../workdir/test.log\f[R] file, which
@@ -32579,6 +32661,10 @@ Also note a number of academic publications by Benjamin
Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization)
about \f[I]Unison\f[R] and synchronization in general.
.SS Changelog
.SS \f[C]v1.69.1\f[R]
.IP \[bu] 2
Fixed an issue causing listings to not capture concurrent modifications
under certain conditions
.SS \f[C]v1.68\f[R]
.IP \[bu] 2
Fixed an issue affecting backends that round modtimes to a lower
@@ -34293,7 +34379,7 @@ It assumes that \f[C]USER_NAME\f[R] has been created.
The Resource entry must include both resource ARNs, as one implies the
bucket and the other implies the bucket\[aq]s objects.
.IP "3." 3
When using s3-no-check-bucket and the bucket already exsits, the
When using s3-no-check-bucket and the bucket already exists, the
\f[C]\[dq]arn:aws:s3:::BUCKET_NAME\[dq]\f[R] doesn\[aq]t have to be
included.
.PP
@@ -38469,7 +38555,7 @@ location_constraint = au-nsw
.PP
Rclone can serve any remote over the S3 protocol.
For details see the rclone serve
s3 (https://rclone.org/commands/rclone_serve_http/) documentation.
s3 (https://rclone.org/commands/rclone_serve_s3/) documentation.
.PP
For example, to serve \f[C]remote:path\f[R] over s3, run the server like
this:
@@ -38495,8 +38581,8 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
around a bug (https://rclone.org/commands/rclone_serve_http/#bugs) which
Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
around a bug (https://rclone.org/commands/rclone_serve_s3/#bugs) which
will be fixed in due course.
.SS Scaleway
.PP
@@ -39689,27 +39775,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Atlanta, GA (USA), us-southeast-1
1 / Amsterdam (Netherlands), nl-ams-1
\[rs] (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
\[rs] (us-southeast-1.linodeobjects.com)
2 / Chicago, IL (USA), us-ord-1
3 / Chennai (India), in-maa-1
\[rs] (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
\[rs] (us-ord-1.linodeobjects.com)
3 / Frankfurt (Germany), eu-central-1
5 / Frankfurt (Germany), eu-central-1
\[rs] (eu-central-1.linodeobjects.com)
4 / Milan (Italy), it-mil-1
6 / Jakarta (Indonesia), id-cgk-1
\[rs] (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\[rs] (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\[rs] (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\[rs] (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\[rs] (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\[rs] (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
\[rs] (it-mil-1.linodeobjects.com)
5 / Newark, NJ (USA), us-east-1
13 / Newark, NJ (USA), us-east-1
\[rs] (us-east-1.linodeobjects.com)
6 / Paris (France), fr-par-1
14 / Osaka (Japan), jp-osa-1
\[rs] (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
\[rs] (fr-par-1.linodeobjects.com)
7 / Seattle, WA (USA), us-sea-1
16 / S\[~a]o Paulo (Brazil), br-gru-1
\[rs] (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
\[rs] (us-sea-1.linodeobjects.com)
8 / Singapore ap-south-1
18 / Singapore, ap-south-1
\[rs] (ap-south-1.linodeobjects.com)
9 / Stockholm (Sweden), se-sto-1
19 / Singapore 2, sg-sin-1
\[rs] (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
\[rs] (se-sto-1.linodeobjects.com)
10 / Washington, DC, (USA), us-iad-1
21 / Washington, DC, (USA), us-iad-1
\[rs] (us-iad-1.linodeobjects.com)
endpoint> 3
endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -45295,7 +45403,7 @@ The nonce is incremented for each chunk read making sure each nonce is
unique for each block written.
The chance of a nonce being reused is minuscule.
If you wrote an exabyte of data (10\[S1]\[u2078] bytes) you would have a
probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a
probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of reusing a
nonce.
.SS Chunk
.PP
@@ -54838,7 +54946,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
[koofr]
[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -54854,6 +54962,28 @@ y/e/d> y
.SS Advanced Data Protection
.PP
ADP is currently unsupported and needs to be disabled.
.PP
On iPhone, Settings \f[C]>\f[R] Apple Account \f[C]>\f[R] iCloud
\f[C]>\f[R] \[aq]Access iCloud Data on the Web\[aq] must be ON, and
\[aq]Advanced Data Protection\[aq] OFF.
.SS Troubleshooting
.SS Missing PCS cookies from the request
.PP
This means you have Advanced Data Protection (ADP) turned on.
This is not supported at the moment.
If you want to use rclone you will have to turn it off.
See above for how to turn it off.
.PP
You will need to clear the \f[C]cookies\f[R] and the
\f[C]trust_token\f[R] fields in the config.
Or you can delete the remote config and start again.
.PP
You should then run \f[C]rclone reconnect remote:\f[R].
.PP
Note that changing the ADP setting may not take effect immediately - you
may need to wait a few hours or a day before you can get rclone to work
- keep clearing the config entry and running
\f[C]rclone reconnect remote:\f[R] until rclone functions properly.
.SS Standard options
.PP
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -60946,7 +61076,7 @@ Microsoft Cloud for US Government
\[dq]de\[dq]
.RS 2
.IP \[bu] 2
Microsoft Cloud Germany
Microsoft Cloud Germany (deprecated - try global region first).
.RE
.IP \[bu] 2
\[dq]cn\[dq]
@@ -61951,6 +62081,43 @@ T}
.TE
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
.SS Impersonate other users as Admin
.PP
Unlike Google Drive, where a service account can impersonate any domain
user, OneDrive requires you to authenticate as an admin account and
manually set up a remote for each user you wish to impersonate.
.IP "1." 3
In Microsoft 365 Admin Center (https://admin.microsoft.com), open each
user you need to \[dq]impersonate\[dq] and go to the OneDrive section.
There is a heading called \[dq]Get access to files\[dq]; click it to
create a link of the format
\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/\f[R]
This also changes the permissions so that your admin user has access.
.IP "2." 3
Then in PowerShell run the following commands:
.IP
.nf
\f[C]
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes \[dq]Files.ReadWrite.All\[dq]
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId \[aq]{emailaddress}\[aq]
# This will give you output of the format:
# Name Id DriveType CreatedDateTime
# ---- -- --------- ---------------
# OneDrive b!XYZ123 business 14/10/2023 1:00:58\[u202F]pm
\f[R]
.fi
.IP "3." 3
Then in rclone add an onedrive remote and choose the
\f[C]Type in driveID\f[R] option, entering the DriveID you got in the
previous step.
Configure one remote per user.
It will then confirm the drive ID, and hopefully give you a message of
\f[C]Found drive \[dq]root\[dq] of type \[dq]business\[dq]\f[R] and then
include the URL of the format
\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents\f[R]
.SS Limitations
.PP
If you don\[aq]t use rclone for 90 days the refresh token will expire.
@@ -74872,6 +75039,67 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.69.1 - 2025-02-14
.PP
See commits (https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
.IP \[bu] 2
bisync: Fix listings missing concurrent modifications (nielash)
.IP \[bu] 2
serve s3: Fix list objects encoding-type (Nick Craig-Wood)
.IP \[bu] 2
fs: Fix confusing \[dq]didn\[aq]t find section in config file\[dq] error
(Nick Craig-Wood)
.IP \[bu] 2
doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick
Craig-Wood, Tim White, Zachary Vorhies)
.IP \[bu] 2
build: Added parallel docker builds and caching for go build in the
container (Anagh Kumar Baranwal)
.RE
.IP \[bu] 2
VFS
.RS 2
.IP \[bu] 2
Fix the cache failing to upload symlinks when \f[C]--links\f[R] was
specified (Nick Craig-Wood)
.IP \[bu] 2
Fix race detected by race detector (Nick Craig-Wood)
.IP \[bu] 2
Close the change notify channel on Shutdown (izouxv)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Fix \[dq]fatal error: concurrent map writes\[dq] (Nick Craig-Wood)
.RE
.IP \[bu] 2
Iclouddrive
.RS 2
.IP \[bu] 2
Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Mark German (de) region as deprecated (Nick Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Added new storage class to magalu provider (Bruno Fernandes)
.IP \[bu] 2
Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
.IP \[bu] 2
Add latest Linode Object Storage endpoints (jbagwell-akamai)
.RE
.SS v1.69.0 - 2025-01-12
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)