{
"CLI_VERSION": "5.25",
"VERSION": "1",
"capsule": "",
"commands": {
"acl": {
"capsule": "Get, set, or change bucket and/or object ACLs",
"commands": {
"ch": {
"capsule": "Get, set, or change bucket and/or object ACLs",
"commands": {},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove all roles associated with the matching entity.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Normally gsutil stops at the first error. The -f option causes\n to continue when it encounters errors. With this option the\nutil exit status will be 0 even if some ACLs couldn't be\nanged.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-g": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a group entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-g",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a project viewers/editors/owners role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs acl ch request recursively, to all objects under the\necified URL.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a user entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"acl",
"ch"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"acl ch\" (or \"acl change\") command updates access control lists, similar\nin spirit to the Linux chmod command. You can specify multiple access grant\nadditions and deletions in a single command run; all changes will be made\natomically to each object in turn. For example, if the command requests\ndeleting one grant and adding a different grant, the ACLs being updated will\nnever be left in an intermediate state where one grant has been deleted but\nthe second grant not yet added. Each change specifies a user or group grant\nto add or delete, and for grant additions, one of R, W, O (for the\npermission to be granted). A more formal description is provided in a later\nsection; below we provide examples.",
"ENTITIES": "There are four different entity types: Users, Groups, All Authenticated Users,\nand All Users.\n\nUsers are added with -u and a plain ID or email address, as in\n\"-u john-doe@gmail.com:r\". Note: Service Accounts are considered to be users.\n\nGroups are like users, but specified with the -g flag, as in\n\"-g power-users@example.com:O\". Groups may also be specified as a full\ndomain, as in \"-g my-company.com:r\".\n\nallAuthenticatedUsers and allUsers are specified directly, as\nin \"-g allUsers:R\" or \"-g allAuthenticatedUsers:O\". These are case\ninsensitive, and may be shortened to \"all\" and \"allauth\", respectively.\n\nRemoving roles is specified with the -d flag and an ID, email\naddress, domain, or one of allUsers or allAuthenticatedUsers.\n\nMany entities' roles can be specified on the same command line, allowing\nbundled changes to be executed in a single run. This will reduce the number of\nrequests made to the server.",
"EXAMPLES": "Examples for \"ch\" sub-command:\n\nGrant anyone on the internet READ access to the object example-object:\n\n gsutil acl ch -u allUsers:R gs://example-bucket/example-object\n\nNOTE: By default, publicly readable objects are served with a Cache-Control\nheader allowing such objects to be cached for 3600 seconds. If you need to\nensure that updates become visible immediately, you should set a\nCache-Control header of \"Cache-Control:private, max-age=0, no-transform\" on\nsuch objects. For help doing this, see \"gsutil help setmeta\".\n\nGrant the user john.doe@example.com READ access to all objects\nin example-bucket that begin with folder/:\n\n gsutil acl ch -r -u john.doe@example.com:R gs://example-bucket/folder/\n\nGrant the group admins@example.com OWNER access to all jpg files in\nexample-bucket:\n\n gsutil acl ch -g admins@example.com:O gs://example-bucket/**.jpg\n\nGrant the owners of project example-project WRITE access to the bucket\nexample-bucket:\n\n gsutil acl ch -p owners-example-project:W gs://example-bucket\n\nNOTE: You can replace 'owners' with 'viewers' or 'editors' to grant access\nto a project's viewers/editors respectively.\n\nRemove access to the bucket example-bucket for the viewers of project number\n12345:\n\n gsutil acl ch -d viewers-12345 gs://example-bucket\n\nNOTE: You cannot remove the project owners group from ACLs of gs:// buckets in\nthe given project. Attempts to do so will appear to succeed, but the service\nwill add the project owners group into the new set of ACLs before applying it.\n\nNote that removing a project requires you to reference the project by\nits number (which you can see with the acl get command) as opposed to its\nproject ID string.\n\nGrant the service account foo@developer.gserviceaccount.com WRITE access to\nthe bucket example-bucket:\n\n gsutil acl ch -u foo@developer.gserviceaccount.com:W gs://example-bucket\n\nGrant all users from the `G Suite\n`_ domain my-domain.org READ\naccess to the bucket gcs.my-domain.org:\n\n gsutil acl ch -g my-domain.org:R gs://gcs.my-domain.org\n\nRemove any current access by john.doe@example.com from the bucket\nexample-bucket:\n\n gsutil acl ch -d john.doe@example.com gs://example-bucket\n\nIf you have a large number of objects to update, enabling multi-threading\nwith the gsutil -m flag can significantly improve performance. The\nfollowing command adds OWNER for admin@example.org using\nmulti-threading:\n\n gsutil -m acl ch -r -u admin@example.org:O gs://example-bucket\n\nGrant READ access to everyone from my-domain.org and to all authenticated\nusers, and grant OWNER to admin@mydomain.org, for the buckets\nmy-bucket and my-other-bucket, with multi-threading enabled:\n\n gsutil -m acl ch -r -g my-domain.org:R -g AllAuth:R \\\n -u admin@mydomain.org:O gs://my-bucket/ gs://my-other-bucket",
"ROLES": "You may specify the following roles with either their shorthand or\ntheir full name:\n\n R: READ\n W: WRITE\n O: OWNER\n\nFor more information on these roles and the access they grant, see the\npermissions section of the `Access Control Lists page\n`_."
}
},
"get": {
"capsule": "Get, set, or change bucket and/or object ACLs",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"acl",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"acl get\" command gets the ACL text for a bucket or object, which you can\nsave and edit for the acl set command."
}
},
"set": {
"capsule": "Get, set, or change bucket and/or object ACLs",
"commands": {},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Performs \"acl set\" request on all object versions.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Normally gsutil stops at the first error. The -f option causes\n to continue when it encounters errors. If some of the ACLs\nuldn't be set, gsutil's exit status will be non-zero even if\nis flag is set. This option is implicitly set when running\nsutil -m acl...\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs \"acl set\" request recursively, to all objects under\ne specified URL.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"acl",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"acl set\" command allows you to set an Access Control List on one or\nmore buckets and objects. The file-or-canned_acl_name parameter names either\na canned ACL or the path to a file that contains ACL text. The simplest way\nto use the \"acl set\" command is to specify one of the canned ACLs, e.g.,:\n\n gsutil acl set private gs://bucket\n\nIf you want to make an object or bucket publicly readable or writable, it is\nrecommended to use \"acl ch\", to avoid accidentally removing OWNER permissions.\nSee the \"acl ch\" section for details.\n\nSee `Predefined ACLs\n`_\nfor a list of canned ACLs.\n\nIf you want to define more fine-grained control over your data, you can\nretrieve an ACL using the \"acl get\" command, save the output to a file, edit\nthe file, and then use the \"acl set\" command to set that ACL on the buckets\nand/or objects. For example:\n\n gsutil acl get gs://bucket/file.txt > acl.txt\n\nMake changes to acl.txt such as adding an additional grant, then:\n\n gsutil acl set acl.txt gs://cats/file.txt\n\nNote that you can set an ACL on multiple buckets or objects at once. For\nexample, to set ACLs on all .jpg files found in a bucket:\n\n gsutil acl set acl.txt gs://bucket/**.jpg\n\nIf you have a large number of ACLs to update you might want to use the\ngsutil -m option, to perform a parallel (multi-threaded/multi-processing)\nupdate:\n\n gsutil -m acl set acl.txt gs://bucket/**.jpg\n\nNote that multi-threading/multi-processing is only done when the named URLs\nrefer to objects, which happens either if you name specific objects or\nif you enumerate objects by using an object wildcard or specifying\nthe acl -r flag."
}
}
},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Performs \"acl set\" request on all object versions.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove all roles associated with the matching entity.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Normally gsutil stops at the first error. The -f option causes\n to continue when it encounters errors. With this option the\nutil exit status will be 0 even if some ACLs couldn't be\nanged.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-g": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a group entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-g",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a project viewers/editors/owners role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs acl ch request recursively, to all objects under the\necified URL.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a user entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"acl"
],
"positionals": [],
"release": "GA",
"sections": {
"CH": "The \"acl ch\" (or \"acl change\") command updates access control lists, similar\nin spirit to the Linux chmod command. You can specify multiple access grant\nadditions and deletions in a single command run; all changes will be made\natomically to each object in turn. For example, if the command requests\ndeleting one grant and adding a different grant, the ACLs being updated will\nnever be left in an intermediate state where one grant has been deleted but\nthe second grant not yet added. Each change specifies a user or group grant\nto add or delete, and for grant additions, one of R, W, O (for the\npermission to be granted). A more formal description is provided in a later\nsection; below we provide examples.",
"DESCRIPTION": "The acl command has three sub-commands:",
"ENTITIES": "There are four different entity types: Users, Groups, All Authenticated Users,\nand All Users.\n\nUsers are added with -u and a plain ID or email address, as in\n\"-u john-doe@gmail.com:r\". Note: Service Accounts are considered to be users.\n\nGroups are like users, but specified with the -g flag, as in\n\"-g power-users@example.com:O\". Groups may also be specified as a full\ndomain, as in \"-g my-company.com:r\".\n\nallAuthenticatedUsers and allUsers are specified directly, as\nin \"-g allUsers:R\" or \"-g allAuthenticatedUsers:O\". These are case\ninsensitive, and may be shortened to \"all\" and \"allauth\", respectively.\n\nRemoving roles is specified with the -d flag and an ID, email\naddress, domain, or one of allUsers or allAuthenticatedUsers.\n\nMany entities' roles can be specified on the same command line, allowing\nbundled changes to be executed in a single run. This will reduce the number of\nrequests made to the server.",
"EXAMPLES": "Examples for \"ch\" sub-command:\n\nGrant anyone on the internet READ access to the object example-object:\n\n gsutil acl ch -u allUsers:R gs://example-bucket/example-object\n\nNOTE: By default, publicly readable objects are served with a Cache-Control\nheader allowing such objects to be cached for 3600 seconds. If you need to\nensure that updates become visible immediately, you should set a\nCache-Control header of \"Cache-Control:private, max-age=0, no-transform\" on\nsuch objects. For help doing this, see \"gsutil help setmeta\".\n\nGrant the user john.doe@example.com READ access to all objects\nin example-bucket that begin with folder/:\n\n gsutil acl ch -r -u john.doe@example.com:R gs://example-bucket/folder/\n\nGrant the group admins@example.com OWNER access to all jpg files in\nexample-bucket:\n\n gsutil acl ch -g admins@example.com:O gs://example-bucket/**.jpg\n\nGrant the owners of project example-project WRITE access to the bucket\nexample-bucket:\n\n gsutil acl ch -p owners-example-project:W gs://example-bucket\n\nNOTE: You can replace 'owners' with 'viewers' or 'editors' to grant access\nto a project's viewers/editors respectively.\n\nRemove access to the bucket example-bucket for the viewers of project number\n12345:\n\n gsutil acl ch -d viewers-12345 gs://example-bucket\n\nNOTE: You cannot remove the project owners group from ACLs of gs:// buckets in\nthe given project. Attempts to do so will appear to succeed, but the service\nwill add the project owners group into the new set of ACLs before applying it.\n\nNote that removing a project requires you to reference the project by\nits number (which you can see with the acl get command) as opposed to its\nproject ID string.\n\nGrant the service account foo@developer.gserviceaccount.com WRITE access to\nthe bucket example-bucket:\n\n gsutil acl ch -u foo@developer.gserviceaccount.com:W gs://example-bucket\n\nGrant all users from the `G Suite\n`_ domain my-domain.org READ\naccess to the bucket gcs.my-domain.org:\n\n gsutil acl ch -g my-domain.org:R gs://gcs.my-domain.org\n\nRemove any current access by john.doe@example.com from the bucket\nexample-bucket:\n\n gsutil acl ch -d john.doe@example.com gs://example-bucket\n\nIf you have a large number of objects to update, enabling multi-threading\nwith the gsutil -m flag can significantly improve performance. The\nfollowing command adds OWNER for admin@example.org using\nmulti-threading:\n\n gsutil -m acl ch -r -u admin@example.org:O gs://example-bucket\n\nGrant READ access to everyone from my-domain.org and to all authenticated\nusers, and grant OWNER to admin@mydomain.org, for the buckets\nmy-bucket and my-other-bucket, with multi-threading enabled:\n\n gsutil -m acl ch -r -g my-domain.org:R -g AllAuth:R \\\n -u admin@mydomain.org:O gs://my-bucket/ gs://my-other-bucket",
"GET": "The \"acl get\" command gets the ACL text for a bucket or object, which you can\nsave and edit for the acl set command.",
"ROLES": "You may specify the following roles with either their shorthand or\ntheir full name:\n\n R: READ\n W: WRITE\n O: OWNER\n\nFor more information on these roles and the access they grant, see the\npermissions section of the `Access Control Lists page\n`_.",
"SET": "The \"acl set\" command allows you to set an Access Control List on one or\nmore buckets and objects. The file-or-canned_acl_name parameter names either\na canned ACL or the path to a file that contains ACL text. The simplest way\nto use the \"acl set\" command is to specify one of the canned ACLs, e.g.,:\n\n gsutil acl set private gs://bucket\n\nIf you want to make an object or bucket publicly readable or writable, it is\nrecommended to use \"acl ch\", to avoid accidentally removing OWNER permissions.\nSee the \"acl ch\" section for details.\n\nSee `Predefined ACLs\n`_\nfor a list of canned ACLs.\n\nIf you want to define more fine-grained control over your data, you can\nretrieve an ACL using the \"acl get\" command, save the output to a file, edit\nthe file, and then use the \"acl set\" command to set that ACL on the buckets\nand/or objects. For example:\n\n gsutil acl get gs://bucket/file.txt > acl.txt\n\nMake changes to acl.txt such as adding an additional grant, then:\n\n gsutil acl set acl.txt gs://cats/file.txt\n\nNote that you can set an ACL on multiple buckets or objects at once. For\nexample, to set ACLs on all .jpg files found in a bucket:\n\n gsutil acl set acl.txt gs://bucket/**.jpg\n\nIf you have a large number of ACLs to update you might want to use the\ngsutil -m option, to perform a parallel (multi-threaded/multi-processing)\nupdate:\n\n gsutil -m acl set acl.txt gs://bucket/**.jpg\n\nNote that multi-threading/multi-processing is only done when the named URLs\nrefer to objects, which happens either if you name specific objects or\nif you enumerate objects by using an object wildcard or specifying\nthe acl -r flag."
}
},
"autoclass": {
"capsule": "Configure Autoclass feature",
"commands": {
"get": {
"capsule": "Configure Autoclass feature",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"autoclass",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``get`` sub-command gets the current Autoclass configuration for a\nbucket. The returned configuration has the following fields:\n\n``enabled``: a boolean field indicating whether the feature is on or off.\n\n``toggleTime``: a timestamp indicating when the enabled field was set."
}
},
"set": {
"capsule": "Configure Autoclass feature",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"autoclass",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``set`` sub-command requires an additional sub-command, either ``on``\nor ``off``, which enables or disables Autoclass for the specified\nbucket(s)."
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"autoclass"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The `Autoclass `_\nfeature automatically selects the best storage class for objects based\non access patterns. This command has two sub-commands, ``get`` and\n``set``.",
"GET": "The ``get`` sub-command gets the current Autoclass configuration for a\nbucket. The returned configuration has the following fields:\n\n``enabled``: a boolean field indicating whether the feature is on or off.\n\n``toggleTime``: a timestamp indicating when the enabled field was set.",
"SET": "The ``set`` sub-command requires an additional sub-command, either ``on``\nor ``off``, which enables or disables Autoclass for the specified\nbucket(s)."
}
},
"bucketpolicyonly": {
"capsule": "Configure uniform bucket-level access",
"commands": {
"get": {
"capsule": "Configure uniform bucket-level access",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"bucketpolicyonly",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``bucketpolicyonly get`` command shows whether uniform bucket-level\naccess is enabled for the specified Cloud Storage bucket.",
"EXAMPLES": "Check if your buckets are using uniform bucket-level access:\n\n gsutil bucketpolicyonly get gs://redbucket gs://bluebucket"
}
},
"set": {
"capsule": "Configure uniform bucket-level access",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"bucketpolicyonly",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``bucketpolicyonly set`` command enables or disables the uniform bucket-level\naccess feature on Google Cloud Storage buckets.",
"EXAMPLES": "Configure your buckets to use uniform bucket-level access:\n\n gsutil bucketpolicyonly set on gs://redbucket gs://bluebucket\n\nConfigure your buckets to NOT use uniform bucket-level access:\n\n gsutil bucketpolicyonly set off gs://redbucket gs://bluebucket"
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"bucketpolicyonly"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The Bucket Policy Only feature is now known as `uniform bucket-level access\n`_.\nThe ``bucketpolicyonly`` command is still supported, but we recommend using\nthe equivalent ``ubla`` `command\n`_.\n\nThe ``bucketpolicyonly`` command is used to retrieve or configure the\nuniform bucket-level access setting of Cloud Storage buckets. This command has\ntwo sub-commands, ``get`` and ``set``.",
"EXAMPLES": "Configure your buckets to use uniform bucket-level access:\n\n gsutil bucketpolicyonly set on gs://redbucket gs://bluebucket\n\nConfigure your buckets to NOT use uniform bucket-level access:\n\n gsutil bucketpolicyonly set off gs://redbucket gs://bluebucket",
"GET": "The ``bucketpolicyonly get`` command shows whether uniform bucket-level\naccess is enabled for the specified Cloud Storage bucket.",
"SET": "The ``bucketpolicyonly set`` command enables or disables the uniform bucket-level\naccess feature on Google Cloud Storage buckets."
}
},
"cat": {
"capsule": "Concatenate object content to stdout",
"commands": {},
"flags": {
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "Prints short header for each object. For example:\ngsutil cat -h gs://bucket/meeting_notes/2012_Feb/*.txt\nis would print a header with the object name before the contents\n each text object that matched the wildcard.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "range Causes gsutil to output just the specified byte range of the\nject. Ranges can be of these forms:\nstart-end (e.g., -r 256-5939)\nstart- (e.g., -r 256-)\n-numbytes (e.g., -r -5)\nere offsets start at 0, start-end means to return bytes start\nrough end (inclusive), start- means to return bytes start\nrough the end of the object, and -numbytes means to return the\nst numbytes of the object. For example:\ngsutil cat -r 256-939 gs://bucket/object\nturns bytes 256 through 939, while:\ngsutil cat -r -5 gs://bucket/object\nturns the final 5 bytes of the object.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"cat"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The cat command outputs the contents of one or more URLs to stdout.\nWhile the cat command does not compute a checksum, it is otherwise\nequivalent to doing:\n\n gsutil cp url... -\n\n(The final '-' causes gsutil to stream the output to stdout.)\n\nWARNING: The gsutil cat command does not compute a checksum of the\ndownloaded data. Therefore, we recommend that users either perform\ntheir own validation of the output of gsutil cat or use gsutil cp\nor rsync (both of which perform integrity checking automatically)."
}
},
"compose": {
"capsule": "Concatenate a sequence of objects into a new composite object.",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"compose"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The compose command creates a new object whose content is the concatenation\nof a given sequence of source objects under the same bucket. gsutil uses\nthe content type of the first source object to determine the destination\nobject's content type and does not modify or delete the source objects as\npart of the compose command. For more information, see the `composite objects\ntopic `_.\n\nThere is a limit (currently 32) to the number of components that can\nbe composed in a single operation."
}
},
"config": {
"capsule": "Obtain credentials and create configuration file",
"commands": {},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Prompt for Google Cloud Storage access key and secret (the older\nthentication method before OAuth2 was supported) instead of\ntaining an OAuth2 token.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Prompt for service account credentials. This option requires that\n-a`` is not set.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-n": {
"attr": {},
"category": "",
"default": "",
"description": "Write the configuration file without authentication configured.\nis flag is mutually exclusive with all flags other than ``-o``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-n",
"nargs": "0",
"type": "bool",
"value": ""
},
"-o": {
"attr": {},
"category": "",
"default": "",
"description": " Write the configuration to instead of ~/.boto.\ne ``-`` for stdout.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-o",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"config"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``gsutil config`` command generally applies to users who have legacy\nstandalone installations of gsutil. If you installed gsutil via the Cloud SDK,\n``gsutil config`` fails unless you are specifically using the ``-a`` flag or\nhave configured gcloud to not pass its managed credentials to gsutil (via the\ncommand ``gcloud config set pass_credentials_to_gsutil false``). For all other\nuse cases, Cloud SDK users should use the ``gcloud auth`` group of commands\ninstead, which configures OAuth2 credentials that gcloud implicitly passes to\ngsutil at runtime. To check if you are using gsutil from the Cloud SDK or as a\nlegacy standalone, use ``gsutil version -l`` and in the output look for\n\"using cloud sdk\".\n\nImportant: The default behavior for the ``gsutil config`` command is to obtain\nuser account credentials for authentication. However, user account credentials\nare no longer supported for standalone gsutil. For this reason, running the\ndefault ``gsutil config`` command fails, and using any of the following flags\ncauses the command to fail: ``-b``, ``-f``, ``-r``, ``--reauth``, ``-s``,\n``-w``. When using standalone gsutil, it's recommended that you use\nservice account credentials via the ``-e`` flag.\n\nThe ``gsutil config`` command obtains access credentials for Cloud Storage and\nwrites a `boto/gsutil configuration file\n`_ containing\nthe obtained credentials along with a number of other configuration-\ncontrollable values.\n\nUnless specified otherwise (see OPTIONS), the configuration file is written\nto ~/.boto (i.e., the file .boto under the user's home directory). If the\ndefault file already exists, an attempt is made to rename the existing file\nto ~/.boto.bak; if that attempt fails the command exits. A different\ndestination file can be specified with the ``-o`` option (see OPTIONS).\n\nBecause the boto configuration file contains your credentials you should\nkeep its file permissions set so no one but you has read access. (The file\nis created read-only when you run ``gsutil config``.)",
"SERVICE ACCOUNT CREDENTIALS": "Service accounts are useful for authenticating on behalf of a service or\napplication (as opposed to a user). If you use gsutil as a legacy\nstand-alone tool, you configure credentials for service accounts using the\n``-e`` option:\n\n gsutil config -e\n\nNote that if you use gsutil through the Cloud SDK, you instead activate your\nservice account via the `gcloud auth activate-service-account\n`_\ncommand.\n\nWhen you run ``gsutil config -e``, you are prompted for the path to your\nprivate key file and, if not using a JSON key file, your service account\nemail address and key file password. To get this data, follow the instructions\non `Service Accounts `_.\nUsing this information, gsutil populates the \"gs_service_key_file\" attribute\nin the boto configuration file. If not using a JSON key file, gsutil also\npopulates the \"gs_service_client_id\" and \"gs_service_key_file_password\"\nattributes.\n\nNote that your service account is NOT considered an Owner for the purposes of\nAPI access (see \"gsutil help creds\" for more information about this). See\nhttps://developers.google.com/identity/protocols/OAuth2ServiceAccount for\nfurther information on service account authentication.\n\nIf you want to use credentials based on access key and secret (the older\nauthentication method before OAuth2 was supported), see the ``-a`` option in\nthe OPTIONS section.\n\nIf you wish to use gsutil with other providers (or to copy data back and\nforth between multiple providers) you can edit their credentials into the\n[Credentials] section after creating the initial boto configuration file."
}
},
"cors": {
"capsule": "Get or set a CORS configuration for one or more buckets",
"commands": {
"get": {
"capsule": "Get or set a CORS configuration for one or more buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"cors",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Gets the CORS configuration for a single bucket. The output from\n``cors get`` can be redirected into a file, edited and then updated using\n``cors set``."
}
},
"set": {
"capsule": "Get or set a CORS configuration for one or more buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"cors",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Sets the CORS configuration for one or more buckets. The ``cors-json-file``\nspecified on the command line should be a path to a local file containing\na JSON-formatted CORS configuration, such as the example described above."
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"cors"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Gets or sets the Cross-Origin Resource Sharing (CORS) configuration on one or\nmore buckets. This command is supported for buckets only, not objects. An\nexample CORS JSON file looks like the following:\n\n [\n {\n \"origin\": [\"http://origin1.example.com\"],\n \"responseHeader\": [\"Content-Type\"],\n \"method\": [\"GET\"],\n \"maxAgeSeconds\": 3600\n }\n ]\n\nThe above CORS configuration explicitly allows cross-origin GET requests from\nhttp://origin1.example.com and may include the Content-Type response header.\nThe preflight request may be cached for 1 hour.\n\nNote that requests to the authenticated browser download endpoint ``storage.cloud.google.com``\ndo not allow CORS requests. For more information about supported endpoints for CORS, see\n`Cloud Storage CORS support `_.\n\nThe following (empty) CORS JSON file removes any CORS configuration for a\nbucket:\n\n []\n\nThe cors command has two sub-commands:",
"GET": "Gets the CORS configuration for a single bucket. The output from\n``cors get`` can be redirected into a file, edited and then updated using\n``cors set``.",
"SET": "Sets the CORS configuration for one or more buckets. The ``cors-json-file``\nspecified on the command line should be a path to a local file containing\na JSON-formatted CORS configuration, such as the example described above."
}
},
"cp": {
"capsule": "Copy files and objects",
"commands": {},
"flags": {
"--stet": {
"attr": {},
"category": "",
"default": "",
"description": "If the STET binary can be found in boto or PATH, cp will\n use the split-trust encryption tool for end-to-end encryption.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--stet",
"nargs": "0",
"type": "bool",
"value": ""
},
"-A": {
"attr": {},
"category": "",
"default": "",
"description": "Copy all source versions from a source bucket or folder.\n If not set, only the live version of each source object is\n copied.\n NOTE: This option is only useful when the destination\n bucket has Object Versioning enabled. Additionally, the generation\n numbers of copied versions do not necessarily match the order of the\n original generation numbers.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-A",
"nargs": "0",
"type": "bool",
"value": ""
},
"-D": {
"attr": {},
"category": "",
"default": "",
"description": "Copy in \"daisy chain\" mode, which means copying between two buckets\n by first downloading to the machine where gsutil is run, then\n uploading to the destination bucket. The default mode is a\n \"copy in the cloud,\" where data is copied between two buckets without\n uploading or downloading.\n During a \"copy in the cloud,\" a source composite object remains composite\n at its destination. However, you can use \"daisy chain\" mode to change a\n composite object into a non-composite object. For example:\n gsutil cp -D gs://bucket/obj gs://bucket/obj_tmp\n gsutil mv gs://bucket/obj_tmp gs://bucket/obj\n NOTE: \"Daisy chain\" mode is automatically used when copying\n between providers: for example, when copying data from Cloud Storage\n to another provider.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-D",
"nargs": "0",
"type": "bool",
"value": ""
},
"-I": {
"attr": {},
"category": "",
"default": "",
"description": "Use ``stdin`` to specify a list of files or objects to copy. You can use\n gsutil in a pipeline to upload or download objects as generated by a program.\n For example:\n cat filelist | gsutil -m cp -I gs://my-bucket\n where the output of ``cat filelist`` is a one-per-line list of\n files, cloud URLs, and wildcards of files and cloud URLs.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-I",
"nargs": "0",
"type": "bool",
"value": ""
},
"-J": {
"attr": {},
"category": "",
"default": "",
"description": "Applies gzip transport encoding to file uploads. This option\n works like the ``-j`` option described above, but it applies to\n all uploaded files, regardless of extension.\n CAUTION: If some of the source files don't compress well, such\n as binary data, using this option may result in longer uploads.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-J",
"nargs": "0",
"type": "bool",
"value": ""
},
"-L": {
"attr": {},
"category": "",
"default": "",
"description": " Outputs a manifest log file with detailed information about\n each item that was copied. This manifest contains the following\n information for each item:\n - Source path.\n - Destination path.\n - Source size.\n - Bytes transferred.\n - MD5 hash.\n - Transfer start time and date in UTC and ISO 8601 format.\n - Transfer completion time and date in UTC and ISO 8601 format.\n - Upload id, if a resumable upload was performed.\n - Final result of the attempted transfer, either success or failure.\n - Failure details, if any.\n If the log file already exists, gsutil uses the file as an\n input to the copy process, and appends log items to\n the existing file. Objects that are marked in the\n existing log file as having been successfully copied or\n skipped are ignored. Objects without entries are\n copied and ones previously marked as unsuccessful are\n retried. This option can be used in conjunction with the ``-c`` option to\n build a script that copies a large number of objects reliably,\n using a bash script like the following:\n until gsutil cp -c -L cp.log -r ./dir gs://bucket; do\n sleep 1\n done\n The -c option enables copying to continue after failures\n occur, and the -L option allows gsutil to pick up where it\n left off without duplicating work. The loop continues\n running as long as gsutil exits with a non-zero status. A non-zero\n status indicates there was at least one failure during the copy\n operation.\n NOTE: If you are synchronizing the contents of a\n directory and a bucket, or the contents of two buckets, see\n \"gsutil help rsync\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-L",
"nargs": "0",
"type": "bool",
"value": ""
},
"-P": {
"attr": {},
"category": "",
"default": "",
"description": "Enables POSIX attributes to be preserved when objects are\n copied. ``gsutil cp`` copies fields provided by ``stat``. These fields\n are the user ID of the owner, the group\n ID of the owning group, the mode or permissions of the file, and\n the access and modification time of the file. For downloads, these\n attributes are only set if the source objects were uploaded\n with this flag enabled.\n On Windows, this flag only sets and restores access time and\n modification time. This is because Windows doesn't support\n POSIX uid/gid/mode.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-P",
"nargs": "0",
"type": "bool",
"value": ""
},
"-U": {
"attr": {},
"category": "",
"default": "",
"description": "Skips objects with unsupported object types instead of failing.\n Unsupported object types include Amazon S3 objects in the GLACIER\n storage class.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-U",
"nargs": "0",
"type": "bool",
"value": ""
},
"-Z": {
"attr": {},
"category": "",
"default": "",
"description": "Applies gzip content-encoding to file uploads. This option\n works like the ``-z`` option described above, but it applies to\n all uploaded files, regardless of extension.\n CAUTION: If some of the source files don't compress well, such\n as binary data, using this option may result in files taking up\n more space in the cloud than they would if left uncompressed.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-Z",
"nargs": "0",
"type": "bool",
"value": ""
},
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "canned_acl Applies the specific ``canned_acl`` to uploaded objects. See\n \"gsutil help acls\" for further details.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "If an error occurs, continue attempting to copy the remaining\n files. If any copies are unsuccessful, gsutil's exit status\n is non-zero, even if this flag is set. This option is\n implicitly set when running ``gsutil -m cp...``.\n NOTE: ``-c`` only applies to the actual copying operation. If an\n error, such as ``invalid Unicode file name``, occurs while iterating\n over the files in the local directory, gsutil prints an error\n message and aborts.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Exclude symlinks. When specified, symbolic links are not copied.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-j": {
"attr": {},
"category": "",
"default": "",
"description": " Applies gzip transport encoding to any file upload whose\n extension matches the ``-j`` extension list. This is useful when\n uploading files with compressible content such as .js, .css,\n or .html files. This also saves network bandwidth while\n leaving the data uncompressed in Cloud Storage.\n When you specify the ``-j`` option, files being uploaded are\n compressed in-memory and on-the-wire only. Both the local\n files and Cloud Storage objects remain uncompressed. The\n uploaded objects retain the ``Content-Type`` and name of the\n original files.\n Note that if you want to use the ``-m`` `top-level option\n `_\n to parallelize copies along with the ``-j/-J`` options, your\n performance may be bottlenecked by the\n \"max_upload_compression_buffer_size\" boto config option,\n which is set to 2 GiB by default. You can change this\n compression buffer size to a higher limit. For example:\n gsutil -o \"GSUtil:max_upload_compression_buffer_size=8G\" \\\n -m cp -j html,txt -r /local/source/dir gs://bucket/path",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-j",
"nargs": "0",
"type": "bool",
"value": ""
},
"-n": {
"attr": {},
"category": "",
"default": "",
"description": "No-clobber. When specified, existing files or objects at the\n destination are not replaced. Any items that are skipped\n by this option are reported as skipped. gsutil\n performs an additional GET request to check if an item\n exists before attempting to upload the data. This saves gsutil\n from retransmitting data, but the additional HTTP requests may make\n small object transfers slower and more expensive.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-n",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Preserves ACLs when copying in the cloud. Note\n that this option has performance and cost implications only when\n using the XML API, as the XML API requires separate HTTP calls for\n interacting with ACLs. You can mitigate this\n performance issue using ``gsutil -m cp`` to perform parallel\n copying. Note that this option only works if you have OWNER access\n to all objects that are copied. If you want all objects in the\n destination bucket to end up with the same ACL, you can avoid these\n performance issues by setting a default object ACL on that bucket\n instead of using ``cp -p``. See \"gsutil help defacl\".\n Note that it's not valid to specify both the ``-a`` and ``-p`` options\n together.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "The ``-R`` and ``-r`` options are synonymous. They enable directories,\n buckets, and bucket subdirectories to be copied recursively.\n If you don't use this option for an upload, gsutil copies objects\n it finds and skips directories. Similarly, if you don't\n specify this option for a download, gsutil copies\n objects at the current bucket directory level and skips subdirectories.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": " Specifies the storage class of the destination object. If not\n specified, the default storage class of the destination bucket\n is used. This option is not valid for copying to non-cloud destinations.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
},
"-v": {
"attr": {},
"category": "",
"default": "",
"description": "Prints the version-specific URL for each uploaded object. You can\n use these URLs to safely make concurrent upload requests, because\n Cloud Storage refuses to perform an update if the current\n object version doesn't match the version-specific URL. See\n `generation numbers\n `_\n for more details.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-v",
"nargs": "0",
"type": "bool",
"value": ""
},
"-z": {
"attr": {},
"category": "",
"default": "",
"description": " Applies gzip content-encoding to any file upload whose\n extension matches the ``-z`` extension list. This is useful when\n uploading files with compressible content such as .js, .css,\n or .html files, because it reduces network bandwidth and storage\n sizes. This can both improve performance and reduce costs.\n When you specify the ``-z`` option, the data from your files is\n compressed before it is uploaded, but your actual files are\n left uncompressed on the local disk. The uploaded objects\n retain the ``Content-Type`` and name of the original files, but\n have their ``Content-Encoding`` metadata set to ``gzip`` to\n indicate that the object data stored are compressed on the\n Cloud Storage servers and have their ``Cache-Control`` metadata\n set to ``no-transform``.\n For example, the following command:\n gsutil cp -z html \\\n cattypes.html tabby.jpeg gs://mycats\n does the following:\n - The ``cp`` command uploads the files ``cattypes.html`` and\n ``tabby.jpeg`` to the bucket ``gs://mycats``.\n - Based on the file extensions, gsutil sets the ``Content-Type``\n of ``cattypes.html`` to ``text/html`` and ``tabby.jpeg`` to\n ``image/jpeg``.\n - The ``-z`` option compresses the data in the file ``cattypes.html``.\n - The ``-z`` option also sets the ``Content-Encoding`` for\n ``cattypes.html`` to ``gzip`` and the ``Cache-Control`` for\n ``cattypes.html`` to ``no-transform``.\n Because the ``-z/-Z`` options compress data prior to upload, they\n are not subject to the same compression buffer bottleneck that\n can affect the ``-j/-J`` options.\n Note that if you download an object with ``Content-Encoding:gzip``,\n gsutil decompresses the content before writing the local file.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-z",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"cp"
],
"positionals": [],
"release": "GA",
"sections": {
"COMPOSITE UPLOADS": "gsutil can automatically use\n`object composition `_\nto perform uploads in parallel for large, local files being uploaded to\nCloud Storage. See the `parallel composite uploads documentation\n`_ for a\ncomplete discussion.",
"DESCRIPTION": "The ``gsutil cp`` command allows you to copy data between your local file\nsystem and the cloud, within the cloud, and between\ncloud storage providers. For example, to upload all text files from the\nlocal directory to a bucket, you can run:\n\n gsutil cp *.txt gs://my-bucket\n\nYou can also download data from a bucket. The following command downloads\nall text files from the top-level of a bucket to your current directory:\n\n gsutil cp gs://my-bucket/*.txt .\n\nYou can use the ``-n`` option to prevent overwriting the content of\nexisting files. The following example downloads text files from a bucket\nwithout clobbering the data in your directory:\n\n gsutil cp -n gs://my-bucket/*.txt .\n\nUse the ``-r`` option to copy an entire directory tree.\nFor example, to upload the directory tree ``dir``:\n\n gsutil cp -r dir gs://my-bucket\n\nIf you have a large number of files to transfer, you can perform a parallel\nmulti-threaded/multi-processing copy using the\ntop-level gsutil ``-m`` option (see \"gsutil help options\"):\n\n gsutil -m cp -r dir gs://my-bucket\n\nYou can use the ``-I`` option with ``stdin`` to specify a list of URLs to\ncopy, one per line. This allows you to use gsutil\nin a pipeline to upload or download objects as generated by a program:\n\n cat filelist | gsutil -m cp -I gs://my-bucket\n\nor:\n\n cat filelist | gsutil -m cp -I ./download_dir\n\nwhere the output of ``cat filelist`` is a list of files, cloud URLs, and\nwildcards of files and cloud URLs.\n\nNOTE: Shells like ``bash`` and ``zsh`` sometimes attempt to expand\nwildcards in ways that can be surprising. You may also encounter issues when\nattempting to copy files whose names contain wildcard characters. For more\ndetails about these issues, see `Wildcard behavior considerations\n`_.",
"HANDLING": "The ``cp`` command retries when failures occur, but if enough failures happen\nduring a particular copy or delete operation, or if a failure isn't retryable,\nthe ``cp`` command skips that object and moves on. If any failures were not\nsuccessfully retried by the end of the copy run, the ``cp`` command reports the\nnumber of failures and exits with a non-zero status.\n\nFor details about gsutil's overall retry handling, see `Retry strategy\n`_.",
"IN THE CLOUD AND METADATA PRESERVATION": "If both the source and destination URL are cloud URLs from the same\nprovider, gsutil copies data \"in the cloud\" (without downloading\nto and uploading from the machine where you run gsutil). In addition to\nthe performance and cost advantages of doing this, copying in the cloud\npreserves metadata such as ``Content-Type`` and ``Cache-Control``. In contrast,\nwhen you download data from the cloud, it ends up in a file with\nno associated metadata, unless you have some way to keep\nor re-create that metadata.\n\nCopies spanning locations and/or storage classes cause data to be rewritten\nin the cloud, which may take some time (but is still faster than\ndownloading and re-uploading). Such operations can be resumed with the same\ncommand if they are interrupted, so long as the command parameters are\nidentical.\n\nNote that by default, the gsutil ``cp`` command does not copy the object\nACL to the new object, and instead uses the default bucket ACL (see\n\"gsutil help defacl\"). You can override this behavior with the ``-p``\noption.\n\nWhen copying in the cloud, if the destination bucket has Object Versioning\nenabled, by default ``gsutil cp`` copies only live versions of the\nsource object. For example, the following command causes only the single live\nversion of ``gs://bucket1/obj`` to be copied to ``gs://bucket2``, even if there\nare noncurrent versions of ``gs://bucket1/obj``:\n\n gsutil cp gs://bucket1/obj gs://bucket2\n\nTo also copy noncurrent versions, use the ``-A`` flag:\n\n gsutil cp -A gs://bucket1/obj gs://bucket2\n\nThe top-level gsutil ``-m`` flag is not allowed when using the ``cp -A`` flag.",
"NAMES ARE CONSTRUCTED": "The ``gsutil cp`` command attempts to name objects in ways that are consistent with the\nLinux ``cp`` command. This means that names are constructed depending\non whether you're performing a recursive directory copy or copying\nindividually-named objects, or whether you're copying to an existing or\nnon-existent directory.\n\nWhen you perform recursive directory copies, object names are constructed to\nmirror the source directory structure starting at the point of recursive\nprocessing. For example, if ``dir1/dir2`` contains the file ``a/b/c``, then the\nfollowing command creates the object ``gs://my-bucket/dir2/a/b/c``:\n\n gsutil cp -r dir1/dir2 gs://my-bucket\n\nIn contrast, copying individually-named files results in objects named by\nthe final path component of the source files. For example, assuming again that\n``dir1/dir2`` contains ``a/b/c``, the following command creates the object\n``gs://my-bucket/c``:\n\n gsutil cp dir1/dir2/** gs://my-bucket\n\nNote that in the above example, the '**' wildcard matches all names\nanywhere under ``dir``. The wildcard '*' matches names just one level deep. For\nmore details, see `URI wildcards\n`_.\n\nThe same rules apply for uploads and downloads: recursive copies of buckets and\nbucket subdirectories produce a mirrored filename structure, while copying\nindividually or wildcard-named objects produce flatly-named files.\n\nIn addition, the resulting names depend on whether the destination subdirectory\nexists. For example, if ``gs://my-bucket/subdir`` exists as a subdirectory,\nthe following command creates the object ``gs://my-bucket/subdir/dir2/a/b/c``:\n\n gsutil cp -r dir1/dir2 gs://my-bucket/subdir\n\nIn contrast, if ``gs://my-bucket/subdir`` does not exist, this same ``gsutil cp``\ncommand creates the object ``gs://my-bucket/subdir/a/b/c``.\n\nNOTE: The\n`Google Cloud Platform Console `_\ncreates folders by creating \"placeholder\" objects that end\nwith a \"/\" character. gsutil skips these objects when downloading from the\ncloud to the local file system, because creating a file that\nends with a \"/\" is not allowed on Linux and macOS. We\nrecommend that you only create objects that end with \"/\" if you don't\nintend to download such objects using gsutil.",
"OBJECT DOWNLOADS": "gsutil can automatically use ranged ``GET`` requests to perform downloads in\nparallel for large files being downloaded from Cloud Storage. See `sliced object\ndownload documentation\n`_\nfor a complete discussion.",
"OVER OS-SPECIFIC FILE TYPES (SUCH AS SYMLINKS AND DEVICES)": "Please see the section about OS-specific file types in \"gsutil help rsync\".\nWhile that section refers to the ``rsync`` command, analogous\npoints apply to the ``cp`` command.",
"TEMP DIRECTORIES": "gsutil writes data to a temporary directory in several cases:\n\n- when compressing data to be uploaded (see the ``-z`` and ``-Z`` options)\n- when decompressing data being downloaded (for example, when the data has\n ``Content-Encoding:gzip`` as a result of being uploaded\n using gsutil cp -z or gsutil cp -Z)\n- when running integration tests using the gsutil test command\n\nIn these cases, it's possible the temporary file location on your system that\ngsutil selects by default may not have enough space. If gsutil runs out of\nspace during one of these operations (for example, raising\n\"CommandException: Inadequate temp space available to compress \"\nduring a ``gsutil cp -z`` operation), you can change where it writes these\ntemp files by setting the TMPDIR environment variable. On Linux and macOS,\nyou can set the variable as follows:\n\n TMPDIR=/some/directory gsutil cp ...\n\nYou can also add this line to your ~/.bashrc file and restart the shell\nbefore running gsutil:\n\n export TMPDIR=/some/directory\n\nOn Windows 7, you can change the TMPDIR environment variable from Start ->\nComputer -> System -> Advanced System Settings -> Environment Variables.\nYou need to reboot after making this change for it to take effect. Rebooting\nis not necessary after running the export command on Linux and macOS.",
"TO/FROM SUBDIRECTORIES; DISTRIBUTING TRANSFERS ACROSS MACHINES": "You can use gsutil to copy to and from subdirectories by using a command\nlike this:\n\n gsutil cp -r dir gs://my-bucket/data\n\nThis causes ``dir`` and all of its files and nested subdirectories to be\ncopied under the specified destination, resulting in objects with names like\n``gs://my-bucket/data/dir/a/b/c``. Similarly, you can download from bucket\nsubdirectories using the following command:\n\n gsutil cp -r gs://my-bucket/data dir\n\nThis causes everything nested under ``gs://my-bucket/data`` to be downloaded\ninto ``dir``, resulting in files with names like ``dir/data/a/b/c``.\n\nCopying subdirectories is useful if you want to add data to an existing\nbucket directory structure over time. It's also useful if you want\nto parallelize uploads and downloads across multiple machines (potentially\nreducing overall transfer time compared with running ``gsutil -m\ncp`` on one machine). For example, if your bucket contains this structure:\n\n gs://my-bucket/data/result_set_01/\n gs://my-bucket/data/result_set_02/\n ...\n gs://my-bucket/data/result_set_99/\n\nyou can perform concurrent downloads across 3 machines by running these\ncommands on each machine, respectively:\n\n gsutil -m cp -r gs://my-bucket/data/result_set_[0-3]* dir\n gsutil -m cp -r gs://my-bucket/data/result_set_[4-6]* dir\n gsutil -m cp -r gs://my-bucket/data/result_set_[7-9]* dir\n\nNote that ``dir`` could be a local directory on each machine, or a\ndirectory mounted off of a shared file server. The performance of the latter\ndepends on several factors, so we recommend experimenting\nto find out what works best for your computing environment.",
"TRANSFERS": "Use '-' in place of src_url or dst_url to perform a `streaming transfer\n`_.\n\nStreaming uploads using the `JSON API\n`_ are buffered\nin memory part-way back into the file and can thus sometimes resume in the event\nof network or service problems.\n\ngsutil does not support resuming streaming uploads using the XML API or\nresuming streaming downloads for either JSON or XML. If you have a large amount\nof data to transfer in these cases, we recommend that you write the data to a\nlocal file and copy that file rather than streaming it.",
"VALIDATION": "gsutil automatically performs checksum validation for copies to and from Cloud\nStorage. For more information, see `Hashes and ETags\n`_."
}
},
"defacl": {
"capsule": "Get, set, or change default ACL on buckets",
"commands": {
"ch": {
"capsule": "Get, set, or change default ACL on buckets",
"commands": {},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove all roles associated with the matching entity.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Normally gsutil stops at the first error. The -f option causes\n to continue when it encounters errors. With this option the\nutil exit status will be 0 even if some ACLs couldn't be\nanged.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-g": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a group entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-g",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a project viewers/editors/owners role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a user entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"defacl",
"ch"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"defacl ch\" (or \"defacl change\") command updates the default object\naccess control list for a bucket. The syntax is shared with the \"acl ch\"\ncommand, so see the \"CH\" section of \"gsutil help acl\" for the full help\ndescription.",
"EXAMPLES": "Grant anyone on the internet READ access by default to any object created\nin the bucket example-bucket:\n\n gsutil defacl ch -u AllUsers:R gs://example-bucket\n\nNOTE: By default, publicly readable objects are served with a Cache-Control\nheader allowing such objects to be cached for 3600 seconds. If you need to\nensure that updates become visible immediately, you should set a\nCache-Control header of \"Cache-Control:private, max-age=0, no-transform\" on\nsuch objects. For help doing this, see \"gsutil help setmeta\".\n\nAdd the user john.doe@example.com to the default object ACL on bucket\nexample-bucket with READ access:\n\n gsutil defacl ch -u john.doe@example.com:READ gs://example-bucket\n\nAdd the group admins@example.com to the default object ACL on bucket\nexample-bucket with OWNER access:\n\n gsutil defacl ch -g admins@example.com:O gs://example-bucket\n\nRemove the group admins@example.com from the default object ACL on bucket\nexample-bucket:\n\n gsutil defacl ch -d admins@example.com gs://example-bucket\n\nAdd the owners of project example-project-123 to the default object ACL on\nbucket example-bucket with READ access:\n\n gsutil defacl ch -p owners-example-project-123:R gs://example-bucket\n\nNOTE: You can replace 'owners' with 'viewers' or 'editors' to grant access\nto a project's viewers/editors respectively."
}
},
"get": {
"capsule": "Get, set, or change default ACL on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"defacl",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Gets the default ACL text for a bucket, which you can save and edit\nfor use with the \"defacl set\" command."
}
},
"set": {
"capsule": "Get, set, or change default ACL on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"defacl",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"defacl set\" command sets default object ACLs for the specified buckets.\nIf you specify a default object ACL for a certain bucket, Google Cloud\nStorage applies the default object ACL to all new objects uploaded to that\nbucket, unless an ACL for that object is separately specified during upload.\n\nSimilar to the \"acl set\" command, the file-or-canned_acl_name names either a\ncanned ACL or the path to a file that contains ACL text. See \"gsutil help\nacl\" for examples of editing and setting ACLs via the acl command. See\n`Predefined ACLs\n`_\nfor a list of canned ACLs.\n\nSetting a default object ACL on a bucket provides a convenient way to ensure\nnewly uploaded objects have a specific ACL. If you don't set the bucket's\ndefault object ACL, it will default to project-private. If you then upload\nobjects that need a different ACL, you will need to perform a separate ACL\nupdate operation for each object. Depending on how many objects require\nupdates, this could be very time-consuming."
}
}
},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove all roles associated with the matching entity.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Normally gsutil stops at the first error. The -f option causes\n to continue when it encounters errors. With this option the\nutil exit status will be 0 even if some ACLs couldn't be\nanged.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-g": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a group entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-g",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a project viewers/editors/owners role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Add or modify a user entity's role.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"defacl"
],
"positionals": [],
"release": "GA",
"sections": {
"CH": "The \"defacl ch\" (or \"defacl change\") command updates the default object\naccess control list for a bucket. The syntax is shared with the \"acl ch\"\ncommand, so see the \"CH\" section of \"gsutil help acl\" for the full help\ndescription.",
"DESCRIPTION": "The defacl command has three sub-commands:",
"EXAMPLES": "Grant anyone on the internet READ access by default to any object created\nin the bucket example-bucket:\n\n gsutil defacl ch -u AllUsers:R gs://example-bucket\n\nNOTE: By default, publicly readable objects are served with a Cache-Control\nheader allowing such objects to be cached for 3600 seconds. If you need to\nensure that updates become visible immediately, you should set a\nCache-Control header of \"Cache-Control:private, max-age=0, no-transform\" on\nsuch objects. For help doing this, see \"gsutil help setmeta\".\n\nAdd the user john.doe@example.com to the default object ACL on bucket\nexample-bucket with READ access:\n\n gsutil defacl ch -u john.doe@example.com:READ gs://example-bucket\n\nAdd the group admins@example.com to the default object ACL on bucket\nexample-bucket with OWNER access:\n\n gsutil defacl ch -g admins@example.com:O gs://example-bucket\n\nRemove the group admins@example.com from the default object ACL on bucket\nexample-bucket:\n\n gsutil defacl ch -d admins@example.com gs://example-bucket\n\nAdd the owners of project example-project-123 to the default object ACL on\nbucket example-bucket with READ access:\n\n gsutil defacl ch -p owners-example-project-123:R gs://example-bucket\n\nNOTE: You can replace 'owners' with 'viewers' or 'editors' to grant access\nto a project's viewers/editors respectively.",
"GET": "Gets the default ACL text for a bucket, which you can save and edit\nfor use with the \"defacl set\" command.",
"SET": "The \"defacl set\" command sets default object ACLs for the specified buckets.\nIf you specify a default object ACL for a certain bucket, Google Cloud\nStorage applies the default object ACL to all new objects uploaded to that\nbucket, unless an ACL for that object is separately specified during upload.\n\nSimilar to the \"acl set\" command, the file-or-canned_acl_name names either a\ncanned ACL or the path to a file that contains ACL text. See \"gsutil help\nacl\" for examples of editing and setting ACLs via the acl command. See\n`Predefined ACLs\n`_\nfor a list of canned ACLs.\n\nSetting a default object ACL on a bucket provides a convenient way to ensure\nnewly uploaded objects have a specific ACL. If you don't set the bucket's\ndefault object ACL, it will default to project-private. If you then upload\nobjects that need a different ACL, you will need to perform a separate ACL\nupdate operation for each object. Depending on how many objects require\nupdates, this could be very time-consuming."
}
},
"defstorageclass": {
"capsule": "Get or set the default storage class on buckets",
"commands": {
"get": {
"capsule": "Get or set the default storage class on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"defstorageclass",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Gets the default storage class for a bucket."
}
},
"set": {
"capsule": "Get or set the default storage class on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"defstorageclass",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"defstorageclass set\" command sets the default\n`storage class `_ for\nthe specified bucket(s). If you specify a default storage class for a certain\nbucket, Google Cloud Storage applies the default storage class to all new\nobjects uploaded to that bucket, except when the storage class is overridden\nby individual upload requests.\n\nSetting a default storage class on a bucket provides a convenient way to\nensure newly uploaded objects have a specific storage class. If you don't set\nthe bucket's default storage class, it will default to Standard."
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"defstorageclass"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The defstorageclass command has two sub-commands:",
"GET": "Gets the default storage class for a bucket.",
"SET": "The \"defstorageclass set\" command sets the default\n`storage class `_ for\nthe specified bucket(s). If you specify a default storage class for a certain\nbucket, Google Cloud Storage applies the default storage class to all new\nobjects uploaded to that bucket, except when the storage class is overridden\nby individual upload requests.\n\nSetting a default storage class on a bucket provides a convenient way to\nensure newly uploaded objects have a specific storage class. If you don't set\nthe bucket's default storage class, it will default to Standard."
}
},
"du": {
"capsule": "Display object size usage",
"commands": {},
"flags": {
"-0": {
"attr": {},
"category": "",
"default": "",
"description": "Ends each output line with a 0 byte rather than a newline. You\nn use this to make the output machine-readable.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-0",
"nargs": "0",
"type": "bool",
"value": ""
},
"-X": {
"attr": {},
"category": "",
"default": "",
"description": "Similar to ``-e``, but excludes patterns from the given file. The\ntterns to exclude should be listed one per line.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-X",
"nargs": "0",
"type": "bool",
"value": ""
},
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Includes both live and noncurrent object versions. Also prints the\nneration and metageneration number for each listed object. If\nis flag is not specified, only live object versions are included.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "Includes a total size at the end of the output.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Exclude a pattern from the report. Example: -e \"*.o\"\ncludes any object that ends in \".o\". Can be specified multiple\nmes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "Prints object sizes in human-readable format. For example, ``1 KiB``,\n234 MiB``, or ``2GiB``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": "Displays only the total size for each argument, omitting the list of\ndividual objects.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"du"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The du command displays the amount of space in bytes used up by the\nobjects in a bucket, subdirectory, or project. The syntax emulates\nthe Linux ``du -b`` command, which reports the disk usage of files and subdirectories.\nFor example, the following command reports the total space used by all objects and\nsubdirectories under gs://your-bucket/dir:\n\n gsutil du -s -a gs://your-bucket/dir",
"EXAMPLES": "To list the size of each object in a bucket:\n\n gsutil du gs://bucketname\n\nTo list the size of each object in the ``prefix`` subdirectory:\n\n gsutil du gs://bucketname/prefix/*\n\nTo include the total number of bytes in human-readable form:\n\n gsutil du -ch gs://bucketname\n\nTo see only the summary of the total number of (live) bytes in two given\nbuckets:\n\n gsutil du -s gs://bucket1 gs://bucket2\n\nTo list the size of each object in a bucket with `Object Versioning\n`_ enabled,\nincluding noncurrent objects:\n\n gsutil du -a gs://bucketname\n\nTo list the size of each object in a bucket, except objects that end in \".bak\",\nwith each object printed ending in a null byte:\n\n gsutil du -e \"*.bak\" -0 gs://bucketname\n\nTo list the size of each bucket in a project and the total size of the\nproject:\n\n gsutil -o GSUtil:default_project_id=project-name du -shc"
}
},
"hash": {
"capsule": "Calculate file hashes",
"commands": {},
"flags": {
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "Calculate a CRC32c hash for the specified files.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "Output hashes in hex format. By default, gsutil uses base64.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Calculate a MD5 hash for the specified files.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hash"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Calculate hashes on local files, which can be used to compare with\n``gsutil ls -L`` output. If a specific hash option is not provided, this\ncommand calculates all gsutil-supported hashes for the files.\n\nNote that gsutil automatically performs hash validation when uploading or\ndownloading files, so this command is only needed if you want to write a\nscript that separately checks the hash.\n\nIf you calculate a CRC32c hash for files without a precompiled crcmod\ninstallation, hashing will be very slow. See \"gsutil help crcmod\" for details."
}
},
"help": {
"capsule": "",
"commands": {
"acls": {
"capsule": "Working With Access Control Lists",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"acls"
],
"positionals": [],
"release": "GA",
"sections": {
"ACCESSING PUBLIC OBJECTS": "Objects with public READ access can be accessed anonymously by gsutil, via\na browser, or via Cloud Storage APIs. For more details on accessing public\nobjects, see:\n\n https://cloud.google.com/storage/docs/access-public-data",
"ACL JSON": "When you use a canned ACL, it is translated into an JSON representation\nthat can later be retrieved and edited to specify more fine-grained\ndetail about who can read and write buckets and objects. By running\nthe \"gsutil acl get\" command you can retrieve the ACL JSON, and edit it to\ncustomize the permissions.\n\nAs an example, if you create an object in a bucket that has no default\nobject ACL set and then retrieve the ACL on the object, it will look\nsomething like this:\n\n[\n {\n \"entity\": \"group-00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7\",\n \"entityId\": \"00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7\",\n \"role\": \"OWNER\"\n },\n {\n \"entity\": \"group-00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51\",\n \"entityId\": \"00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51\",\n \"role\": \"OWNER\"\n },\n {\n \"entity\": \"00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71\",\n \"entityId\": \"00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71\",\n \"role\": \"READER\"\n }\n]\n\nThe ACL consists collection of elements, each of which specifies an Entity\nand a Role. Entities are the way you specify an individual or group of\nindividuals, and Roles specify what access they're permitted.\n\nThis particular ACL grants OWNER to two groups (which means members\nof those groups are allowed to read the object and read and write the ACL),\nand READ permission to a third group. The project groups are (in order)\nthe project owners group, editors group, and viewers group.\n\nThe 64 digit hex identifiers (following any prefixes like \"group-\") used in\nthis ACL are called canonical IDs. They are used to identify predefined\ngroups associated with the project that owns the bucket: the Project Owners,\nProject Editors, and All Project Team Members groups. For more information\nthe permissions and roles of these project groups, see \"gsutil help projects\".\n\nHere's an example of an ACL specified using the group-by-email and\ngroup-by-domain entities:\n\n\n{\n \"entity\": \"group-travel-companion-owners@googlegroups.com\"\n \"email\": \"travel-companion-owners@googlegroups.com\",\n \"role\": \"OWNER\",\n}\n{\n \"domain\": \"example.com\",\n \"entity\": \"domain-example.com\"\n \"role\": \"READER\",\n},\n\n\nThis ACL grants members of an email group OWNER, and grants READ\naccess to any user in a domain (which must be a Google Apps for Business\ndomain). By applying email group grants to a collection of objects\nyou can edit access control for large numbers of objects at once via\nhttp://groups.google.com. That way, for example, you can easily and quickly\nchange access to a group of company objects when employees join and leave\nyour company (i.e., without having to individually change ACLs across\npotentially millions of objects).",
"BUCKET VS OBJECT ACLS": "In Google Cloud Storage, the bucket ACL works as follows:\n\n- Users granted READ access are allowed to list the bucket contents and read\n bucket metadata other than its ACL.\n\n- Users granted WRITE access are allowed READ access and also are allowed to\n write and delete objects in that bucket, including overwriting previously\n written objects.\n\n- Users granted OWNER access are allowed WRITE access and also are allowed to\n read and write the bucket's ACL.\n\nThe object ACL works as follows:\n\n- Users granted READ access are allowed to read the object's data and\n metadata.\n\n- Users granted OWNER access are allowed READ access and also are allowed to\n read and write the object's ACL.\n\nA couple of points are worth noting, that sometimes surprise users:\n\n1. There is no WRITE access for objects; attempting to set an ACL with WRITE\n permission for an object will result in an error.\n\n2. The bucket ACL plays no role in determining who can read objects; only the\n object ACL matters for that purpose. This is different from how things\n work in Linux file systems, where both the file and directory permission\n control file read access. It also means, for example, that someone with\n OWNER over the bucket may not have read access to objects in the bucket.\n This is by design, and supports useful cases. For example, you might want\n to set up bucket ownership so that a small group of administrators have\n OWNER on the bucket (with the ability to delete data to control storage\n costs), but not grant those users read access to the object data (which\n might be sensitive data that should only be accessed by a different\n specific group of users).",
"CANNED ACLS": "The simplest way to set an ACL on a bucket or object is using a \"canned\nACL\". The available canned ACLs are:\n\nproject-private\n Gives permission to the project team based on their roles. Anyone who is\n part of the team has READ permission, and project owners and project editors\n have OWNER permission. This is the default ACL for newly created\n buckets. This is also the default ACL for newly created objects unless the\n default object ACL for that bucket has been changed. For more details see\n \"gsutil help projects\".\n\nprivate\n Gives the requester (and only the requester) OWNER permission for a\n bucket or object.\n\npublic-read\n Gives all users (whether logged in or anonymous) READ permission. When\n you apply this to an object, anyone on the Internet can read the object\n without authenticating.\n\n NOTE: By default, publicly readable objects are served with a Cache-Control\n header allowing such objects to be cached for 3600 seconds. If you need to\n ensure that updates become visible immediately, you should set a\n Cache-Control header of \"Cache-Control:private, max-age=0, no-transform\" on\n such objects. For help doing this, see 'gsutil help setmeta'.\n\n NOTE: Setting a bucket ACL to public-read will remove all OWNER and WRITE\n permissions from everyone except the project owner group. Setting an object\n ACL to public-read will remove all OWNER and WRITE permissions from\n everyone except the object owner. For this reason, we recommend using\n the \"acl ch\" command to make these changes; see \"gsutil help acl ch\" for\n details.\n\npublic-read-write\n Gives all users READ and WRITE permission. This ACL applies only to buckets.\n NOTE: Setting a bucket to public-read-write will allow anyone on the\n Internet to upload anything to your bucket. You will be responsible for this\n content.\n\n NOTE: Setting a bucket ACL to public-read-write will remove all OWNER\n permissions from everyone except the project owner group. Setting an object\n ACL to public-read-write will remove all OWNER permissions from\n everyone except the object owner. For this reason, we recommend using\n the \"acl ch\" command to make these changes; see \"gsutil help acl ch\" for\n details.\n\nauthenticated-read\n Gives the requester OWNER permission and gives all authenticated\n Google account holders READ permission.\n\nbucket-owner-read\n Gives the requester OWNER permission and gives the bucket owner READ\n permission. This is used only with objects.\n\nbucket-owner-full-control\n Gives the requester OWNER permission and gives the bucket owner\n OWNER permission. This is used only with objects.",
"DESCRIPTION": "Access Control Lists (ACLs) allow you to control who can read and write\nyour data, and who can read and write the ACLs themselves.\n\nIf not specified at the time an object is uploaded (e.g., via the gsutil cp\n-a option), objects will be created with a default object ACL set on the\nbucket (see \"gsutil help defacl\"). You can replace the ACL on an object\nor bucket using the \"gsutil acl set\" command, or\nmodify the existing ACL using the \"gsutil acl ch\" command (see \"gsutil help\nacl\").",
"SHARING SCENARIOS": "For more detailed examples how to achieve various useful sharing use\ncases see https://cloud.google.com/storage/docs/collaboration"
}
},
"crc32c": {
"capsule": "CRC32C and Installing crcmod",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"crc32c"
],
"positionals": [],
"release": "GA",
"sections": {
"CONFIGURATION": "To determine if the compiled version of crcmod is available in your Python\nenvironment, you can inspect the output of the ``gsutil version`` command for\nthe \"compiled crcmod\" entry:\n\n $ gsutil version -l\n ...\n compiled crcmod: True\n ...\n\nIf your crcmod library is compiled to a native binary, this value will be\nTrue. If using the pure-Python version, the value will be False.\n\nTo control gsutil's behavior in response to crcmod's status, you can set the\n``check_hashes`` variable in your `boto configuration file\n`_. For details on this\nvariable, see the surrounding comments in your boto configuration file. If\n``check_hashes`` is not present in your configuration file, regenerate the\nfile by running ``gsutil config`` with the appropriate ``-e`` or ``-a`` flag.",
"DESCRIPTION": "Google Cloud Storage provides a cyclic redundancy check (CRC) header that\nallows clients to verify the integrity of object contents. For non-composite\nobjects Google Cloud Storage also provides an MD5 header to allow clients to\nverify object integrity, but for composite objects only the CRC is available.\ngsutil automatically performs integrity checks on all uploads and downloads.\nAdditionally, you can use the ``gsutil hash`` command to calculate a CRC for\nany local file.\n\nThe CRC variant used by Google Cloud Storage is called CRC32C (Castagnoli),\nwhich is not available in the standard Python distribution. The implementation\nof CRC32C used by gsutil is provided by a third-party Python module called\n`crcmod `_.\n\nThe crcmod module contains a pure-Python implementation of CRC32C, but using\nit results in slow checksum computation and subsequently very poor\nperformance. A Python C extension is also provided by crcmod, which requires\ncompiling into a binary module for use. gsutil ships with a precompiled\ncrcmod C extension for macOS; for other platforms, see the installation\ninstructions below.\n\nAt the end of each copy operation, the ``gsutil cp``, ``gsutil mv``, and\n``gsutil rsync`` commands validate that the checksum of the source\nfile/object matches the checksum of the destination file/object. If the\nchecksums do not match, gsutil will delete the invalid copy and print a\nwarning message. This very rarely happens, but if it does, you should\nretry the operation.",
"INSTALLATION": "These installation instructions assume that:\n\n- You have ``pip`` installed. Consult the `pip installation instructions\n `_ for details on how\n to install ``pip``.\n- Your installation of ``pip`` can be found in your ``PATH`` environment\n variable. If it cannot, you may need to replace ``pip3`` in the commands\n below with the full path to the executable.\n- You are installing the crcmod package for use with your system installation\n of Python, and thus use the ``sudo`` command. If installing crcmod for a\n different Python environment (e.g. in a virtualenv), you should omit\n ``sudo`` from the commands below.\n- You are using a Python 3 version with gsutil. You can determine which\n Python version gsutil is using by running ``gsutil version -l`` and looking\n for the ``python version: 2.x.x`` or ``python version: 3.x.x`` line.\n\nCentOS, RHEL, and Fedora\n------------------------\n\nTo compile and install crcmod:\n\n yum install gcc python3-devel python3-setuptools redhat-rpm-config\n sudo pip3 uninstall crcmod\n sudo pip3 install --no-cache-dir -U crcmod\n\nDebian and Ubuntu\n-----------------\n\nTo compile and install crcmod:\n\n sudo apt-get install gcc python3-dev python3-setuptools\n sudo pip3 uninstall crcmod\n sudo pip3 install --no-cache-dir -U crcmod\n\nEnterprise SUSE\n-----------------\n\nTo compile and install crcmod when using Enterprise SUSE for SAP 12:\n\n sudo zypper install gcc python-devel\n sudo pip uninstall crcmod\n sudo pip install --no-cache-dir -U crcmod\n\nTo compile and install crcmod when using Enterprise SUSE for SAP 15:\n\n sudo zypper install gcc python3-devel\n sudo pip uninstall crcmod\n sudo pip install --no-cache-dir -U crcmod\n\nmacOS\n-----\n\ngsutil distributes a pre-compiled version of crcmod for macOS, so you shouldn't\nneed to compile and install it yourself. If for some reason the pre-compiled\nversion is not being detected, please let the Google Cloud Storage team know\n(see ``gsutil help support``).\n\nTo compile manually on macOS, you will first need to install\n`Xcode `_ and then run:\n\n pip3 install -U crcmod\n\nWindows\n-------\n\nAn installer is available for the compiled version of crcmod from the Python\nPackage Index (PyPi) at the following URL:\n\nhttps://pypi.python.org/pypi/crcmod/1.7\n\nNOTE: If you have installed crcmod and gsutil hasn't detected it, it may have\nbeen installed to the wrong directory. It should be located at\n\\files\\Lib\\site-packages\\crcmod\\\n\nIn some cases the installer will incorrectly install to\n\\Lib\\site-packages\\crcmod\\\n\nManually copying the crcmod directory to the correct location should resolve\nthe issue."
}
},
"creds": {
"capsule": "Credential Types Supporting Various Use Cases",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"creds"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "gsutil currently supports several types of credentials/authentication, as\nwell as the ability to `access public data anonymously\n`_. Each of these\ntype of credentials is discussed in more detail below, along with\ninformation about configuring and using credentials via the Cloud SDK.",
"SUPPORTED CREDENTIAL TYPES": "gsutil supports several types of credentials (the specific subset depends on\nwhich distribution of gsutil you are using; see above discussion).\n\nOAuth2 User Account:\n This type of credential can be used for authenticating requests on behalf of\n a specific user (which is probably the most common use of gsutil). This is\n the default type of credential that is created when you run ``gcloud init``.\n This credential type is not supported for stand-alone versions of gsutil.\n For more details about OAuth2 authentication, see:\n https://developers.google.com/accounts/docs/OAuth2#scenarios\n\nHMAC:\n This type of credential can be used by programs that are implemented using\n HMAC authentication, which is an authentication mechanism supported by\n certain other cloud storage service providers. This type of credential can\n also be used for interactive use when moving data to/from service providers\n that support HMAC credentials. This is the type of credential that is\n created when you run ``gsutil config -a``.\n\n Note that it's possible to set up HMAC credentials for both Google Cloud\n Storage and another service provider; or to set up OAuth2 user account\n credentials for Google Cloud Storage and HMAC credentials for another\n service provider. To do so, after you run the ``gcloud init`` command, you\n can edit the generated ~/.boto config file and look for comments for where\n other credentials can be added.\n\n For more details about HMAC authentication, see\n https://developers.google.com/storage/docs/reference/v1/getting-startedv1#keys\n\nOAuth2 Service Account:\n This is the preferred type of credential to use when authenticating on\n behalf of a service or application (as opposed to a user). For example, if\n you intend to run gsutil out of a nightly cron job to upload/download data,\n using a service account means the cron job does not depend on credentials of\n an individual employee at your company. This is the type of credential that\n is configured when you run ``gcloud auth activate-service-account`` (or\n ``gsutil config -e`` when using stand-alone versions of gsutil).\n\n It is important to note that a service account is considered an Editor by\n default for the purposes of API access, rather than an Owner. In particular,\n the fact that Editors have OWNER access in the default object and\n bucket ACLs, but the canned ACL options remove OWNER access from\n Editors, can lead to unexpected results. The solution to this problem is to\n use \"gsutil acl ch\" instead of \"gsutil acl set \" to change\n permissions on a bucket.\n\n To set up a service account for use with\n ``gcloud auth activate-service-account`` or ``gsutil config -e``, see\n https://cloud.google.com/storage/docs/authentication#generating-a-private-key\n\n For more details about OAuth2 service accounts, see\n https://developers.google.com/accounts/docs/OAuth2ServiceAccount\n\n For further information about account roles, see\n https://developers.google.com/console/help/#DifferentRoles\n\nCompute Engine Internal Service Account:\n This is the type of service account used for accounts hosted by App Engine\n or Compute Engine. 
Such credentials are created automatically for\n you on Compute Engine when you run the ``gcloud compute instances create``\n command and the credentials can be controlled with the ``--scopes`` flag.\n\n For more details about using service account credentials for authenticating workloads\n on Compute Engine, see\n https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances.\n\n For more details about App Engine service accounts, see\n https://developers.google.com/appengine/docs/python/appidentity/overview\n\nService Account Impersonation:\n Impersonating a service account is useful in scenarios where you need to\n grant short-term access to specific resources. For example, if you have a\n bucket of sensitive data that is typically read-only and want to\n temporarily grant write access through a trusted service account.\n\n You can specify which service account to use for impersonation by running\n ``gsutil -i``, ``gsutil config`` and editing the boto configuration file, or\n ``gcloud config set auth/impersonate_service_account [service_account_email_address]``.\n\n In order to impersonate, your original credentials need to be granted\n roles/iam.serviceAccountTokenCreator on the target service account.\n For more information see\n https://cloud.google.com/iam/docs/creating-short-lived-service-account-credentials\n\nExternal Account Credentials (Workload Identity Federation):\n Using workload identity federation, you can access Google Cloud resources\n from Amazon Web Services (AWS), Microsoft Azure or any identity provider\n that supports OpenID Connect (OIDC) or SAML 2.0.\n\n For more information see\n https://cloud.google.com/iam/docs/using-workload-identity-federation"
}
},
"dev": {
"capsule": "Contributing Code to gsutil",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"dev"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "We're open to incorporating gsutil code changes authored by users. Here\nare some guidelines:\n\n1. Before we can accept code submissions, we have to jump a couple of legal\n hurdles. Please fill out either the individual or corporate Contributor\n License Agreement:\n\n - If you are an individual writing original source code and you're\n sure you own the intellectual property,\n then you'll need to sign an individual CLA\n (https://cla.developers.google.com/about/google-individual).\n - If you work for a company that wants to allow you to contribute your\n work to gsutil, then you'll need to sign a corporate CLA\n (https://cla.developers.google.com/about/google-corporate)\n\n Follow either of the two links above to access the appropriate CLA and\n instructions for how to sign and return it. Once we receive it, we'll\n add you to the official list of contributors and be able to accept\n your patches.\n\n2. If you found a bug or have an idea for a feature enhancement, we suggest\n you check https://github.com/GoogleCloudPlatform/gsutil/issues to see if it\n has already been reported by another user. From there you can also\n subscribe to updates to the issue.\n\n If a GitHub issue doesn't already exist, create one about your idea before\n sending actual code. Often we can discuss the idea and help propose things\n that could save you later revision work.\n\n3. We tend to avoid adding command line options that are of use to only\n a very small fraction of users, especially if there's some other way\n to accommodate such needs. Adding such options complicates the code and\n also adds overhead to users having to read through an \"alphabet soup\"\n list of option documentation.\n\n4. While gsutil has a number of features specific to Google Cloud Storage,\n it can also be used with other cloud storage providers. We're open to\n including changes for making gsutil support features specific to other\n providers, as long as those changes don't make gsutil work worse for Google\n Cloud Storage. If you do make such changes we recommend including someone\n with knowledge of the specific provider as a code reviewer (see below).\n\n5. You can check out the gsutil code from the GitHub repository:\n\n https://github.com/GoogleCloudPlatform/gsutil\n\n To clone a read-only copy of the repository:\n\n git clone git://github.com/GoogleCloudPlatform/gsutil.git\n\n To push your own changes to GitHub, click the Fork button on the\n repository page and clone the repository from your own fork.\n\n6. The gsutil git repository uses git submodules to pull in external modules.\n After checking out the repository, make sure to also pull the submodules\n by entering into the gsutil top-level directory and run:\n\n git submodule update --init --recursive\n\n7. Please make sure to run all tests against your modified code. To\n do this, change directories into the gsutil top-level directory and run:\n\n ./gsutil test\n\n The above tests take a long time to run because they send many requests to\n the production service. The gsutil test command has a -u argument that will\n only run unit tests. These run quickly, as they are executed with an\n in-memory mock storage service implementation. To run only the unit tests,\n run:\n\n ./gsutil test -u\n\n If you made changes to boto, please run the boto tests. For these tests you\n need to use HMAC credentials (from gsutil config -a), because the boto test\n suite doesn't import the OAuth2 handler. You'll also need to install some\n Python modules. 
Change directories into the boto root directory at\n third_party/boto and run:\n\n pip install -r requirements.txt\n\n (You probably need to run this command using sudo.)\n Make sure each of the individual installations succeeded. If they don't\n you may need to run the install command again.\n\n Then ensure your .boto file has HMAC credentials defined, and then change\n directories into boto's tests directory and run:\n\n python test.py unit\n python test.py -t s3 -t gs -t ssl\n\n8. Please consider contributing test code for your change, especially if the\n change impacts any of the core gsutil code (like the gsutil cp command).\n\n9. Please run the yapf linter with the config files in the root of the GitHub\n repository:\n\n yapf -irp .\n\n10. When it's time to send us code, please submit a PR to the `gsutil GitHub\n repository `_. For help on\n making GitHub PRs, please refer to this\n `GitHub help document `_."
}
},
"encoding": {
"capsule": "Filename encoding and interoperability problems",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"encoding"
],
"positionals": [],
"release": "GA",
"sections": {
"CONVERTING FILENAMES TO UNICODE": "Open-source tools are available to convert filenames for non-Unicode files.\nFor example, to convert from latin1 (a common Windows encoding) to Unicode,\nyou can use\n`Windows iconv `_.\nFor Unix-based systems, you can use\n`libiconv `_.",
"CROSS-PLATFORM ENCODING PROBLEMS OF WHICH TO BE AWARE": "Using UTF-8 for all object names and filenames will ensure that gsutil doesn't\nencounter character encoding errors while operating on the files.\nUnfortunately, it's still possible that files uploaded / downloaded this way\ncan have interoperability problems, for a number of reasons unrelated to\ngsutil. For example:\n\n- Windows filenames are case-insensitive, while Google Cloud Storage, Linux,\n and macOS are not. Thus, for example, if you have two filenames on Linux\n differing only in case and upload both to Google Cloud Storage and then\n subsequently download them to Windows, you will end up with just one file\n whose contents came from the last of these files to be written to the\n filesystem.\n- macOS performs character encoding decomposition based on tables stored in\n the OS, and the tables change between Unicode versions. Thus the encoding\n used by an external library may not match that performed by the OS. It is\n possible that two object names may translate to a single local filename.\n- Windows console support for Unicode is difficult to use correctly.\n\nFor a more thorough list of such issues see `this presentation\n`_\n\nThese problems mostly arise when sharing data across platforms (e.g.,\nuploading data from a Windows machine to Google Cloud Storage, and then\ndownloading from Google Cloud Storage to a machine running macOS).\nUnfortunately these problems are a consequence of the lack of a filename\nencoding standard, and users need to be aware of the kinds of problems that\ncan arise when copying filenames across platforms.\n\nThere is one precaution users can exercise to prevent some of these problems:\nWhen using the Windows console specify wildcards or folders (using the -R\noption) rather than explicitly named individual files.",
"DESCRIPTION": "To reduce the chance for `filename encoding interoperability problems\n`_\ngsutil uses `UTF-8 `_ character encoding\nwhen uploading and downloading files. Because UTF-8 is in widespread (and\ngrowing) use, for most users nothing needs to be done to use UTF-8. Users with\nfiles stored in other encodings (such as\n`Latin 1 `_) must convert those\nfilenames to UTF-8 before attempting to upload the files.\n\nThe most common place where users who have filenames that use some other\nencoding encounter a gsutil error is while uploading files using the recursive\n(-R) option on the gsutil cp , mv, or rsync commands. When this happens you'll\nget an error like this:\n\n CommandException: Invalid Unicode path encountered\n ('dir1/dir2/file_name_with_\\xf6n_bad_chars').\n gsutil cannot proceed with such files present.\n Please remove or rename this file and try again.\n\nNote that the invalid Unicode characters have been hex-encoded in this error\nmessage because otherwise trying to print them would result in another\nerror.\n\nIf you encounter such an error you can either remove the problematic file(s)\nor try to rename them and re-run the command. If you have a modest number of\nsuch files the simplest thing to do is to think of a different name for the\nfile and manually rename the file (using local filesystem tools). If you have\ntoo many files for that to be practical, you can use a bulk rename tool or\nscript.\n\nUnicode errors for valid Unicode filepaths can be caused by lack of Python\nlocale configuration on Linux and Mac OSes. If your file paths are Unicode\nand you get encoding errors, ensure the LANG environment variable is set\ncorrectly. Typically, the LANG variable should be set to something like\n\"en_US.UTF-8\" or \"de_DE.UTF-8\".\n\nNote also that there's no restriction on the character encoding used in file\ncontent - it can be UTF-8, a different encoding, or non-character\ndata (like audio or video content). The gsutil UTF-8 character encoding\nrequirement applies only to filenames.",
"USING UNICODE FILENAMES ON MACOS": "macOS stores filenames in decomposed form (also known as\n`NFD normalization `_).\nFor example, if a filename contains an accented \"e\" character, that character\nwill be converted to an \"e\" followed by an accent before being saved to the\nfilesystem. As a consequence, it's possible to have different name strings\nfor files uploaded from an operating system that doesn't enforce decomposed\nform (like Ubuntu) from one that does (like macOS).\n\nThe following example shows how this behavior could lead to unexpected\nresults. Say you create a file with non-ASCII characters on Ubuntu. Ubuntu\nstores that filename in its composed form. When you upload the file to the\ncloud, it is stored as named. But if you use gsutil rysnc to bring the file to\na macOS machine and edit the file, then when you use gsutil rsync to bring\nthis version back to the cloud, you end up with two different objects, instead\nof replacing the original. This is because macOS converted the filename to\na decomposed form, and Cloud Storage sees this as a different object name.",
"USING UNICODE FILENAMES ON WINDOWS": "Windows support for Unicode in the command shell (cmd.exe or powershell) is\nsomewhat painful, because Windows uses a Windows-specific character encoding\ncalled `cp1252 `_. To use Unicode\ncharacters you need to run this command in the command shell before the first\ntime you use gsutil in that shell:\n\n chcp 65001\n\nIf you neglect to do this before using gsutil, the progress messages while\nuploading files with Unicode names or listing buckets with Unicode object\nnames will look garbled (i.e., with different glyphs than you expect in the\noutput). If you simply run the chcp command and re-run the gsutil command, the\noutput should no longer look garbled.\n\ngsutil attempts to translate between cp1252 encoding and UTF-8 in the main\nplaces that Unicode encoding/decoding problems have been encountered to date\n(traversing the local file system while uploading files, and printing Unicode\nnames while listing buckets). However, because gsutil must perform\ntranslation, it is likely there are other erroneous edge cases when using\nWindows with Unicode. If you encounter problems, you might consider instead\nusing cygwin (on Windows) or Linux or macOS - all of which support Unicode."
}
},
"metadata": {
"capsule": "Working With Object Metadata",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"metadata"
],
"positionals": [],
"release": "GA",
"sections": {
"CURRENTLY SET METADATA": "You can see what metadata is currently set on an object by using:\n\n gsutil ls -L gs://the_bucket/the_object",
"FIELDS; FIELD VALUES": "You can't set some metadata fields, such as ETag and Content-Length. The\nfields you can set are:\n\n- ``Cache-Control``\n- ``Content-Disposition``\n- ``Content-Encoding``\n- ``Content-Language``\n- ``Content-Type``\n- ``Custom-Time``\n- Custom metadata\n\nField names are case-insensitive.\n\nAll fields and their values must consist only of ASCII characters, with the\nexception of values for ``x-goog-meta-`` fields, which may contain arbitrary\nUnicode values. Note that when setting metadata using the XML API, which sends\ncustom metadata as HTTP headers, Unicode characters are encoded using\nUTF-8, then url-encoded to ASCII. For example:\n\n gsutil setmeta -h \"x-goog-meta-foo: ã\" gs://bucket/object\n\nstores the custom metadata key-value pair of ``foo`` and ``%C3%A3``.\nSubsequently, running ``ls -L`` using the JSON API to list the object's\nmetadata prints ``%C3%A3``, while ``ls -L`` using the XML API\nurl-decodes this value automatically, printing the character ``ã``.",
"OF METADATA": "Objects can have associated metadata, which control aspects of how\nGET requests are handled, including ``Content-Type``, ``Cache-Control``,\n``Content-Disposition``, and ``Content-Encoding``. In addition, you can\nset custom ``key:value`` metadata for use by your applications. For a\ndiscussion of specific metadata properties, see the `metadata concept\npage `_.\n\nThere are two ways to set metadata on objects:\n\n- At upload time you can specify one or more metadata properties to\n associate with objects, using the ``gsutil -h option``. For example,\n the following command would cause gsutil to set the ``Content-Type`` and\n ``Cache-Control`` for each of the files being uploaded from a local\n directory named ``images``:\n\n gsutil -h \"Content-Type:text/html\" \\\n -h \"Cache-Control:public, max-age=3600\" cp -r images \\\n gs://bucket/images\n\n Note that -h is an option on the gsutil command, not the cp sub-command.\n\n- You can set or remove metadata fields from already uploaded objects using\n the ``gsutil setmeta`` command. See \"gsutil help setmeta\"."
}
},
"naming": {
"capsule": "Object and Bucket Naming",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"naming"
],
"positionals": [],
"release": "GA",
"sections": {
"NAME REQUIREMENTS": "Object names can contain any sequence of Unicode characters, of length 1-1024\nbytes when UTF-8 encoded. Object names must not contain CarriageReturn,\nCarriageReturnLineFeed, or the XML-disallowed surrogate blocks (xFFFE\nor xFFFF).\n\nWe strongly recommend that you abide by the following object naming\nconventions:\n\n- Avoid using control characters that are illegal in XML 1.0 in your object\n names (#x7F-#x84 and #x86-#x9F). These characters will cause XML listing\n issues when you try to list your objects.\n\n- Avoid using \"#\" in your object names. gsutil interprets object names ending\n with # as version identifiers, so including \"#\" in object\n names can make it difficult or impossible to perform various operations on\n such objects using gsutil (see 'gsutil help versions').\n\n- Avoid using \"[\", \"]\", \"*\", or \"?\" in your object names. gsutil interprets\n these characters as wildcards, so including any of these characters in\n object names can make it difficult or impossible to perform various wildcard\n operations using gsutil (see 'gsutil help wildcards').\n\nSee also 'gsutil help encoding' about file/object name encoding requirements\nand potential interoperability concerns.",
"NAMED BUCKETS": "You can carve out parts of the Google Cloud Storage bucket name space\nby creating buckets with domain names (like \"example.com\").\n\nBefore you can create a bucket name containing one or more '.' characters,\nthe following rules apply:\n\n- If the name is a syntactically valid DNS name ending with a\n currently-recognized top-level domain (such as .com), you will be required\n to verify domain ownership.\n- Otherwise you will be disallowed from creating the bucket.\n\nIf your project needs to use a domain-named bucket, you need to have\na team member both verify the domain and create the bucket. This is\nbecause Google Cloud Storage checks for domain ownership against the\nuser who creates the bucket, so the user who creates the bucket must\nalso be verified as an owner or manager of the domain.\n\nTo verify as the owner or manager of a domain, use the Google Webmaster\nTools verification process. The Webmaster Tools verification process\nprovides three methods for verifying an owner or manager of a domain:\n\n1. Adding a special Meta tag to a site's homepage.\n2. Uploading a special HTML file to a site.\n3. Adding a DNS TXT record to a domain's DNS configuration.\n\nMeta tag verification and HTML file verification are easier to perform and\nare probably adequate for most situations. DNS TXT record verification is\na domain-based verification method that is useful in situations where a\nsite wants to tightly control who can create domain-named buckets. Once\na site creates a DNS TXT record to verify ownership of a domain, it takes\nprecedence over meta tag and HTML file verification. For example, you might\nhave two IT staff members who are responsible for managing your site, called\n\"example.com.\" If they complete the DNS TXT record verification, only they\nwould be able to create buckets called \"example.com\", \"reports.example.com\",\n\"downloads.example.com\", and other domain-named buckets.\n\nSite-Based Verification\n-----------------------\n\nIf you have administrative control over the HTML files that make up a site,\nyou can use one of the site-based verification methods to verify that you\ncontrol or own a site. When you do this, Google Cloud Storage lets you\ncreate buckets representing the verified site and any sub-sites - provided\nnobody has used the DNS TXT record method to verify domain ownership of a\nparent of the site.\n\nAs an example, assume that nobody has used the DNS TXT record method to verify\nownership of the following domains: abc.def.example.com, def.example.com,\nand example.com. In this case, Google Cloud Storage lets you create a bucket\nnamed abc.def.example.com if you verify that you own or control any of the\nfollowing sites:\n\n http://abc.def.example.com\n http://def.example.com\n http://example.com\n\nDomain-Based Verification\n-------------------------\n\nIf you have administrative control over a domain's DNS configuration, you can\nuse the DNS TXT record verification method to verify that you own or control a\ndomain. When you use the domain-based verification method to verify that you\nown or control a domain, Google Cloud Storage lets you create buckets that\nrepresent any subdomain under the verified domain. 
Furthermore, Google Cloud\nStorage prevents anybody else from creating buckets under that domain unless\nyou add their name to the list of verified domain owners or they have verified\ntheir domain ownership by using the DNS TXT record verification method.\n\nFor example, if you use the DNS TXT record verification method to verify your\nownership of the domain example.com, Google Cloud Storage will let you create\nbucket names that represent any subdomain under the example.com domain, such\nas abc.def.example.com, example.com/music/jazz, or abc.example.com/music/jazz.\n\nUsing the DNS TXT record method to verify domain ownership supersedes\nverification by site-based verification methods. For example, if you\nuse the Meta tag method or HTML file method to verify domain ownership\nof http://example.com, but someone else uses the DNS TXT record method\nto verify ownership of the example.com domain, Google Cloud Storage will\nnot allow you to create a bucket named example.com. To create the bucket\nexample.com, the domain owner who used the DNS TXT method to verify domain\nownership must add you to the list of verified domain owners for example.com.\n\nThe DNS TXT record verification method is particularly useful if you manage\na domain for a large organization that has numerous subdomains because it\nlets you control who can create buckets representing those domain names.\n\nNote: If you use the DNS TXT record verification method to verify ownership of\na domain, you cannot create a CNAME record for that domain. RFC 1034 disallows\ninclusion of any other resource records if there is a CNAME resource record\npresent. If you want to create a CNAME resource record for a domain, you must\nuse the Meta tag verification method or the HTML file verification method."
}
},
"options": {
"capsule": "Global Command Line Options",
"commands": {},
"flags": {
"-D": {
"attr": {},
"category": "",
"default": "",
"description": "Shows HTTP requests/headers and additional debug info needed\nen posting support requests, including exception stack traces.\nUTION: The output from using this flag includes authentication\nedentials. Before including this flag in your command, be sure\nu understand how the command's output is used, and, if\ncessary, remove or redact sensitive information.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-D",
"nargs": "0",
"type": "bool",
"value": ""
},
"-DD": {
"attr": {},
"category": "",
"default": "",
"description": "Same as -D, plus HTTP upstream payload.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-DD",
"nargs": "0",
"type": "bool",
"value": ""
},
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "Allows you to specify certain HTTP headers, for example:\ngsutil -h \"Cache-Control:public,max-age=3600\" \\\n -h \"Content-Type:text/html\" cp ...\nte that you need to quote the headers/values that\nntain spaces (such as \"Content-Disposition: attachment;\nlename=filename.ext\"), to avoid having the shell split them\nto separate arguments.\ne following headers are stored as object metadata and used\n future requests on the object:\nCache-Control\nContent-Disposition\nContent-Encoding\nContent-Language\nContent-Type\ne following headers are used to check data integrity:\nContent-MD5\nutil also supports custom metadata headers with a matching\noud Storage Provider prefix, such as:\nx-goog-meta-\nte that for gs:// URLs, the Cache Control header is specific to\ne API being used. The XML API accepts any cache control headers\nd returns them during object downloads. The JSON API respects\nly the public, private, no-cache, max-age, and no-transform\nche control headers.\ne \"gsutil help setmeta\" for the ability to set metadata\nelds on objects after they have been uploaded.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
},
"-i": {
"attr": {},
"category": "",
"default": "",
"description": "Allows you to use the configured credentials to impersonate a\nrvice account, for example:\ngsutil -i \"service-account@google.com\" ls gs://pub\nte that this setting will be ignored by the XML API and S3. See\nsutil help creds' for more information on impersonating service\ncounts.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-i",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Causes supported operations (acl ch, acl set, cp, mv, rm, rsync,\nd setmeta) to run in parallel. This can significantly improve\nrformance if you are performing operations on a large number of\nles over a reasonably fast network connection.\nutil performs the specified operation using a combination of\nlti-threading and multi-processing. The number of threads\nd processors are determined by ``parallel_thread_count`` and\nparallel_process_count``, respectively. These values are set in\ne .boto configuration file or specified in individual requests\nth the ``-o`` top-level flag. Because gsutil has no built-in\npport for throttling requests, you should experiment with these\nlues. The optimal values can vary based on a number of factors,\ncluding network speed, number of CPUs, and available memory.\ning the -m option can consume a significant amount of network\nndwidth and cause problems or make your performance worse if\nu use a slower network. For example, if you start a large rsync\neration over a network link that's also used by a number of\nher important jobs, there could be degraded performance in\nose jobs. Similarly, the -m option can make your performance\nrse, especially for cases that perform all operations locally,\ncause it can \"thrash\" your local disk.\n prevent such issues, reduce the values for\nparallel_thread_count`` and ``parallel_process_count``, or stop\ning the -m option entirely. One tool that you can use to limit\nw much I/O capacity gsutil consumes and prevent it from\nnopolizing your local disk is `ionice\nttp://www.tutorialspoint.com/unix_commands/ionice.htm>`_\nuilt in to many Linux systems). For example, the following\nmmand reduces the I/O priority of gsutil so it doesn't\nnopolize your local disk:\nionice -c 2 -n 7 gsutil -m rsync -r ./dir gs://some bucket\n a download or upload operation using parallel transfer fails\nfore the entire transfer is complete (e.g. failing after 300 of\n00 files have been transferred), you must restart the entire\nansfer.\nso, although most commands normally fail upon encountering an\nror when the -m flag is disabled, all commands continue to try\nl operations when -m is enabled with multiple threads or\nocesses, and the number of failed operations (if any) are\nported as an exception at the end of the command's execution.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
},
"-o": {
"attr": {},
"category": "",
"default": "",
"description": "Set/override values in the `boto configuration file\nttps://cloud.google.com/storage/docs/boto-gsutil>`_, in the\nrmat ``:=``. For examnple,\ngsutil -o \"GSUtil:parallel_thread_count=4\" ...``. This does not\nss the option to gsutil integration tests.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-o",
"nargs": "0",
"type": "bool",
"value": ""
},
"-q": {
"attr": {},
"category": "",
"default": "",
"description": "Causes gsutil to perform operations quietly, i.e., without\nporting progress indicators of files being copied or removed,\nc. Errors are still reported. This option can be useful for\nnning gsutil from a cron job that logs its output to a file, for\nich the only information desired in the log is failures.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-q",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Allows you to specify the ID or number of a user project to be\nlled for the request. For example:\ngsutil -u \"bill-this-project\" cp ...",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"options"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "gsutil supports separate options for the top-level gsutil command and\nthe individual sub-commands (like cp, rm, etc.) The top-level options\ncontrol behavior of gsutil that apply across commands. For example, in\nthe command:\n\n gsutil -m cp -p file gs://bucket/obj\n\nthe -m option applies to gsutil, while the -p option applies to the cp\nsub-command."
}
},
"prod": {
"capsule": "Scripting Production Transfers",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"prod"
],
"positionals": [],
"release": "GA",
"sections": {
"BACKGROUND ON RESUMABLE TRANSFERS": "First, it's helpful to understand gsutil's resumable transfer mechanism,\nand how your script needs to be implemented around this mechanism to work\nreliably. gsutil uses resumable transfer support when you attempt to download\na file of any size or to upload a file larger than a configurable threshold\n(by default, this threshold is 8 MiB). If a transfer fails partway through\n(e.g., because of an intermittent network problem), gsutil uses a\n`truncated randomized binary exponential backoff-and-retry strategy\n`_ that by\ndefault retries transfers up to 23 times over a 10 minute period of time. If\nthe transfer fails each of these attempts with no intervening progress,\ngsutil gives up on the transfer, but keeps a \"tracker\" file for it in a\nconfigurable location (the default location is ~/.gsutil/, in a file named\nby a combination of the SHA1 hash of the name of the bucket and object being\ntransferred and the last 16 characters of the file name). When transfers\nfail in this fashion, you can rerun gsutil at some later time (e.g., after\nthe networking problem has been resolved), and the resumable transfer picks\nup where it left off.",
"DESCRIPTION": "If you use gsutil in large production tasks (such as uploading or\ndownloading many GiBs of data each night), there are a number of things\nyou can do to help ensure success. Specifically, this section discusses\nhow to script large production tasks around gsutil's resumable transfer\nmechanism.",
"SCRIPTING DATA TRANSFER TASKS": "To script large production data transfer tasks around this mechanism,\nyou can implement a script that runs periodically, determines which file\ntransfers have not yet succeeded, and runs gsutil to copy them. Below,\nwe offer a number of suggestions about how this type of scripting should\nbe implemented:\n\n1. When resumable transfers fail without any progress 23 times in a row\n over the course of up to 10 minutes, it probably won't work to simply\n retry the transfer immediately. A more successful strategy would be to\n have a cron job that runs every 30 minutes, determines which transfers\n need to be run, and runs them. If the network experiences intermittent\n problems, the script picks up where it left off and will eventually\n succeed (once the network problem has been resolved).\n\n2. If your business depends on timely data transfer, you should consider\n implementing some network monitoring. For example, you can implement\n a task that attempts a small download every few minutes and raises an\n alert if the attempt fails for several attempts in a row (or more or less\n frequently depending on your requirements), so that your IT staff can\n investigate problems promptly. As usual with monitoring implementations,\n you should experiment with the alerting thresholds, to avoid false\n positive alerts that cause your staff to begin ignoring the alerts.\n\n3. There are a variety of ways you can determine what files remain to be\n transferred. We recommend that you avoid attempting to get a complete\n listing of a bucket containing many objects (e.g., tens of thousands\n or more). One strategy is to structure your object names in a way that\n represents your transfer process, and use gsutil prefix wildcards to\n request partial bucket listings. For example, if your periodic process\n involves downloading the current day's objects, you could name objects\n using a year-month-day-object-ID format and then find today's objects by\n using a command like gsutil ls \"gs://bucket/2011-09-27-*\". Note that it\n is more efficient to have a non-wildcard prefix like this than to use\n something like gsutil ls \"gs://bucket/*-2011-09-27\". The latter command\n actually requests a complete bucket listing and then filters in gsutil,\n while the former asks Google Storage to return the subset of objects\n whose names start with everything up to the \"*\".\n\n For data uploads, another technique would be to move local files from a \"to\n be processed\" area to a \"done\" area as your script successfully copies\n files to the cloud. You can do this in parallel batches by using a command\n like:\n\n gsutil -m cp -r to_upload/subdir_$i gs://bucket/subdir_$i\n\n where i is a shell loop variable. Make sure to check the shell $status\n variable is 0 after each gsutil cp command, to detect if some of the copies\n failed, and rerun the affected copies.\n\n With this strategy, the file system keeps track of all remaining work to\n be done.\n\n4. If you have really large numbers of objects in a single bucket\n (say hundreds of thousands or more), you should consider tracking your\n objects in a database instead of using bucket listings to enumerate\n the objects. For example this database could track the state of your\n downloads, so you can determine what objects need to be downloaded by\n your periodic download script by querying the database locally instead\n of performing a bucket listing.\n\n5. 
Make sure you don't delete partially downloaded temporary files after a\n   transfer fails: gsutil picks up where it left off (and performs a hash\n   of the final downloaded content to ensure data integrity), so deleting\n   partially transferred files will cause you to lose progress and make\n   more wasteful use of your network.\n\n6. If you have a fast network connection, you can speed up the transfer of\n   large numbers of files by using the gsutil -m (multi-threading /\n   multi-processing) option. Be aware, however, that gsutil doesn't attempt to\n   keep track of which files were downloaded successfully in cases where some\n   files failed to download. For example, if you use multi-threaded transfers\n   to download 100 files and 3 failed to download, it is up to your scripting\n   process to determine which transfers didn't succeed, and retry them. A\n   periodic check-and-run approach as outlined earlier would handle this\n   case.\n\n   If you use parallel transfers (gsutil -m) you might want to experiment with\n   the number of threads being used (via the parallel_thread_count setting\n   in the .boto config file). By default, gsutil uses 10 threads for Linux\n   and 24 threads for other operating systems. Depending on your network\n   speed, available memory, CPU load, and other conditions, this may or may\n   not be optimal. Try experimenting with higher or lower numbers of threads\n   to find the best number of threads for your environment."
}
},
"security": {
"capsule": "Security and Privacy Considerations",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"security"
],
"positionals": [],
"release": "GA",
"sections": {
"ACCESS CONTROL LISTS": "Unless you specify a different ACL (e.g., via the gsutil cp -a option), by\ndefault objects written to a bucket use the default object ACL on that bucket.\nUnless you modify that ACL (e.g., via the gsutil defacl command), by default\nit will allow all project editors write access to the object and read/write\naccess to the object's metadata and will allow all project viewers read\naccess to the object.\n\nThe Google Cloud Storage access control system includes the ability to\nspecify that objects are publicly readable. Make sure you intend for any\nobjects you write with this permission to be public. Once \"published\", data\non the Internet can be copied to many places, so it's effectively impossible\nto regain read control over an object written with this permission.\n\nThe Google Cloud Storage access control system includes the ability to\nspecify that buckets are publicly writable. While configuring a bucket this\nway can be convenient for various purposes, we recommend against using this\npermission - it can be abused for distributing illegal content, viruses, and\nother malware, and the bucket owner is legally and financially responsible\nfor the content stored in their buckets. If you need to make content\navailable to customers who don't have Google accounts consider instead using\nsigned URLs (see \"gsutil help signurl\").",
"DATA PRIVACY": "Google will never ask you to share your credentials, password, or other\nsecurity-sensitive information. Beware of potential phishing scams where\nsomeone attempts to impersonate Google and asks for such information.",
"DESCRIPTION": "This help section provides details about various precautions taken by gsutil\nto protect data security, as well as recommendations for how customers should\nsafeguard security.",
"ENCRYPTION AT REST": "All Google Cloud Storage data are automatically stored in an encrypted state,\nbut you can also provide your own encryption keys. For more information, see\n`Cloud Storage Encryption\n`_.",
"LOCAL FILE STORAGE SECURITY": "gsutil takes a number of precautions to protect against security exploits in\nthe files it stores locally:\n\n- When the ``gcloud init``, ``gsutil config -a``, or ``gsutil config -e``\n commands run, they set file protection mode 600 (\"-rw-------\") on the .boto\n configuration file they generate, so only the user (or superuser) can read\n it. This is important because these files contain security-sensitive\n information, including credentials and proxy configuration.\n\n- These commands also use file protection mode 600 for the private key file\n stored locally when you create service account credentials.\n\n- The default level of logging output from gsutil commands does not include\n security-sensitive information, such as OAuth2 tokens and proxy\n configuration information. (See the \"RECOMMENDED USER PRECAUTIONS\" section\n below if you increase the level of debug output, using the gsutil -D\n option.)\n\nNote that protection modes are not supported on Windows, so if you\nuse gsutil on Windows we recommend using an encrypted file system and strong\naccount passwords.",
"MEASUREMENT DATA": "The gsutil perfdiag command collects a variety of performance-related\nmeasurements and details about your local system and network environment, for\nuse in troubleshooting performance problems. None of this information will be\nsent to Google unless you choose to send it.",
"PROXY USAGE": "gsutil supports access via proxies, such as Squid and a number of commercial\nproducts. A full description of their capabilities is beyond the scope of this\ndocumentation, but proxies can be configured to support many security-related\nfunctions, including virus scanning, Data Leakage Prevention, control over\nwhich certificates/CA's are trusted, content type filtering, and many more\ncapabilities. Some of these features can slow or block legitimate gsutil\nbehavior. For example, virus scanning depends on decrypting file content,\nwhich in turn requires that the proxy terminate the gsutil connection and\nestablish a new connection - and in some cases proxies will rewrite content in\nways that result in checksum validation errors and other problems.\n\nFor details on configuring proxies, see the proxy help text generated in your\n.boto configuration file by the ``gcloud init``, ``gsutil -a``, and\n``gsutil -e`` commands.",
"RECOMMENDED USER PRECAUTIONS": "The first and foremost precaution is: Never share your credentials. Each user\nshould have distinct credentials.\n\nIf you run gsutil -D (to generate debugging output) it will include OAuth2\nrefresh and access tokens in the output. Make sure to redact this information\nbefore sending this debug output to anyone during troubleshooting/tech support\ninteractions.\n\nIf you run gsutil --trace-token (to send a trace directly to Google),\nsensitive information like OAuth2 tokens and the contents of any files\naccessed during the trace may be included in the content of the trace.\n\nCustomer-supplied encryption key information in the .boto configuration is\nsecurity sensitive.\n\nThe proxy configuration information in the .boto configuration is\nsecurity-sensitive, especially if your proxy setup requires user and\npassword information. Even if your proxy setup doesn't require user and\npassword, the host and port number for your proxy is often considered\nsecurity-sensitive. Protect access to your .boto configuration file.\n\nIf you are using gsutil from a production environment (e.g., via a cron job\nrunning on a host in your data center), use service account credentials rather\nthan individual user account credentials. These credentials were designed for\nsuch use and, for example, protect you from losing access when an employee\nleaves your company.",
"SECURITY-SENSITIVE FILES WRITTEN TEMPORARILY TO DISK BY GSUTIL": "gsutil buffers data in temporary files in several situations:\n\n- While compressing data being uploaded via gsutil cp -z/-Z, gsutil\n buffers the data in temporary files with protection 600, which it\n deletes after the upload is complete (similarly for downloading files\n that were uploaded with gsutil cp -z/-Z or some other process that sets the\n Content-Encoding to \"gzip\"). However, if you kill the gsutil process\n while the upload is under way the partially written file will be left\n in place. See the \"CHANGING TEMP DIRECTORIES\" section in\n \"gsutil help cp\" for details of where the temporary files are written\n and how to change the temp directory location.\n\n- When performing a resumable upload gsutil stores the upload ID (which,\n as noted above, is a bearer token and thus should be safe-guarded) in a\n file under ~/.gsutil/tracker-files with protection 600, and deletes this\n file after the upload is complete. However, if the upload doesn't\n complete successfully the tracker file is left in place so the resumable\n upload can be re-attempted later. Over time it's possible to accumulate\n these tracker files from aborted upload attempts, though resumable\n upload IDs are only valid for 1 week, so the security risk only exists\n for files less than that age. If you consider the risk of leaving\n aborted upload IDs in the tracker directory too high you could modify\n your upload scripts to delete the tracker files; or you could create a\n cron job to clear the tracker directory periodically.\n\n- The gsutil rsync command stores temporary files (with protection 600)\n containing the names, sizes, and checksums of source and destination\n directories/buckets, which it deletes after the rsync is complete.\n However, if you kill the gsutil process while the rsync is under way the\n listing files will be left in place.\n\nNote that gsutil deletes temporary files using the standard OS unlink system\ncall, which does not perform `data wiping\n`_. Thus, the content of such\ntemporary files can be recovered by a determined adversary.",
"SOFTWARE INTEGRITY AND UPDATES": "gsutil is distributed as a part of the bundled Cloud SDK release. This\ndistribution method takes a variety of security precautions to protect the\nintegrity of the software. We strongly recommend against getting a copy of\ngsutil from any other sources (such as mirror sites).",
"TRANSPORT LAYER SECURITY": "gsutil performs all operations using transport-layer encryption (HTTPS), to\nprotect against data leakage over shared network links. This is also important\nbecause gsutil uses \"bearer tokens\" for authentication (OAuth2) as well as for\nresumable upload identifiers, and such tokens must be protected from being\neavesdropped and reused.\n\ngsutil also supports the older HMAC style of authentication via the XML API\n(see `gsutil endpoints\n`_). While\nHMAC authentication does not use bearer tokens (and thus is not subject to\neavesdropping/replay attacks), it's still important to encrypt data traffic.\n\nTo add an extra layer of security, gsutil supports mutual TLS (mTLS) for\nthe Cloud Storage JSON API. With mTLS, the client verifies the server\ncertificate, and the server also verifies the client.\nTo find out more about how to enable mTLS, see the `install docs\n`_."
}
},
"shim": {
"capsule": "Shim for Running gcloud storage",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"shim"
],
"positionals": [],
"release": "GA",
"sections": {
"AVAILABLE COMMANDS": "The gcloud storage CLI only supports a subset of gsutil commands. What follows\nis a list of commands supported by the shim with any differences in behavior\nnoted.\n\nacl\n------------------------\n\n- The ``ch`` subcommand is not supported.\n\nautoclass\n------------------------\n\n- Works as expected.\n\nbucketpolicyonly\n------------------------\n\n- Works as expected.\n\ncat\n------------------------\n\n- Prints object data for a second object even if the first object is invalid.\n\ncompose\n------------------------\n\n- Works as expected.\n\ncors\n------------------------\n\n- ``get`` subcommand prints \"[]\" instead of \"gs://[bucket name] has no CORS\n configuration\".\n\ncp\n------------------------\n\n- Copies a second object even if the first object is invalid.\n- Does not support file to file copies.\n- Supports copying objects cloud-to-cloud with trailing slashes in the name.\n- The all-version flag (``-A``) silently enables sequential execution rather\n than raising an error.\n\ndefacl\n------------------------\n\n- The ``ch`` subcommand is not supported.\n\ndefstorageclass\n------------------------\n\n- Works as expected.\n\nhash\n------------------------\n\n- In gsutil, the ``-m`` and ``-c`` flags that affect which hashes are displayed\n are ignored for cloud objects. This behavior is fixed for the shim and gcloud\n storage.\n\niam\n------------------------\n\n- The ``ch`` subcommand is not supported.\n- The ``-f`` flag will continue on any error, not just API errors.\n\n\nkms\n------------------------\n\n- The authorize subcommand returns informational messages in a different\n format.\n- The encryption subcommand returns informational messages in a different\n format.\n\nlabels\n------------------------\n- ``get`` subcommand prints \"[]\" instead of \"gs://[bucket name] has no labels\n configuration.\"\n\nlifecycle\n------------------------\n\n- Works as expected.\n\nlogging\n------------------------\n\n- The get subcommand has different JSON spacing and doesn't print an\n informational message if no configuration is found.\n\nls\n------------------------\n\n- Works as expected.\n\nmb\n------------------------\n- Works as expected.\n\nmv\n------------------------\n\n- See notes on cp.\n\nnotification\n------------------------\n\n- The list subcommand prints configuration information as YAML.\n- The delete subcommand offers progress tracking and parallelization.\n\npap\n------------------------\n\n- Works as expected.\n\nrb\n------------------------\n\n- Works as expected.\n\nrequesterpays\n------------------------\n\n- Works as expected.\n\nrewrite\n------------------------\n\n- The -k flag does not throw an error if called without a new key. In both the\n shim and unshimmed cases, the old key is maintained.\n\nrm\n------------------------\n\n- ``$folder$`` delete markers are not supported.\n\nrpo\n------------------------\n\n- Works as expected.\n\nsetmeta\n------------------------\n\n- Does not throw an error if no headers are changed.\n\nstat\n------------------------\n\n- Includes a field \"Storage class update time:\" which may throw off tabbing.\n\nubla\n------------------------\n\n- Works as expected.\n\nversioning\n------------------------\n\n- Works as expected.\n\nweb\n------------------------\n\n- The get subcommand has different JSON spacing and doesn't print an\n informational message if no configuration is found.",
"BOTO CONFIGURATION": "Configuration found in the boto file is mapped 1:1 to gcloud environment\nvariables where appropriate.\n\n[Credentials]\n------------------------\n\n- aws_access_key_id: AWS_ACCESS_KEY_ID\n- aws_secret_access_key: AWS_SECRET_ACCESS_KEY\n- use_client_certificate: CLOUDSDK_CONTEXT_AWARE_USE_CLIENT_CERTIFICATE\n\n[Boto]\n------------------------\n\n- proxy: CLOUDSDK_PROXY_ADDRESS\n- proxy_type: CLOUDSDK_PROXY_TYPE\n- proxy_port: CLOUDSDK_PROXY_PORT\n- proxy_user: CLOUDSDK_PROXY_USERNAME\n- proxy_pass: CLOUDSDK_PROXY_PASSWORD\n- proxy_rdns: CLOUDSDK_PROXY_RDNS\n- http_socket_timeout: CLOUDSDK_CORE_HTTP_TIMEOUT\n- ca_certificates_file: CLOUDSDK_CORE_CUSTOM_CA_CERTS_FILE\n- max_retry_delay: CLOUDSDK_STORAGE_BASE_RETRY_DELAY\n- num_retries: CLOUDSDK_STORAGE_MAX_RETRIES\n\n[GSUtil]\n------------------------\n\n- check_hashes: CLOUDSDK_STORAGE_CHECK_HASHES\n- default_project_id: CLOUDSDK_CORE_PROJECT\n- disable_analytics_prompt: CLOUDSDK_CORE_DISABLE_USAGE_REPORTING\n- use_magicfile: CLOUDSDK_STORAGE_USE_MAGICFILE\n- parallel_composite_upload_threshold: CLOUDSDK_STORAGE_PARALLEL_COMPOSITE_UPLOAD_THRESHOLD\n- resumable_threshold: CLOUDSDK_STORAGE_RESUMABLE_THRESHOLD\n\n[OAuth2]\n------------------------\n\n- client_id: CLOUDSDK_AUTH_CLIENT_ID\n- client_secret: CLOUDSDK_AUTH_CLIENT_SECRET\n- provider_authorization_uri: CLOUDSDK_AUTH_AUTH_HOST\n- provider_token_uri: CLOUDSDK_AUTH_TOKEN_HOST",
"DESCRIPTION": "Cloud SDK includes a new CLI, gcloud storage, that can be considerably faster\nthan gsutil when performing uploads and downloads with less parameter\ntweaking. This new CLI has a syntax and command structure that is familiar to\ngsutil users but is fundamentally different in many important ways. To ease\ntransition to this new CLI, gsutil provides a shim that translates your gsutil\ncommands to gcloud storage commands if an equivalent exists, and falls back to\ngsutil's usual behavior if an equivalent does not exist.",
"GENERAL COMPATIBILITY NOTES": "- Due to its compatibility across all major platforms, multiprocessing is\n enabled for all commands by default (equivalent to the -m option always\n being included in gsutil).\n- A sequence of asterisks greater than 2 (i.e. ``***``) are always treated as\n a single asterisk.\n- Unlike gsutil, gcloud is not designed to be used in parallel invocations,\n and doing so (i.e. running the shim from 2 terminals at once) can lead to\n unpredictable behavior.\n- Assuming a bucket contains an object ``gs://bucket/nested/foo.txt``,\n gsutil's wildcard iterator will match ``foo.txt`` given a URL like\n ``gs://bucket/*/nested/*``. The shim will not match ``foo.txt`` given the\n same URL.\n- This will be updated as new commands are supported by both gcloud storage\n and the shim.",
"TO ENABLE": "Set ``use_gcloud_storage=True`` in the ``.boto`` config file under the\n``[GSUtil]`` section:\n\n [GSUtil]\n use_gcloud_storage=True\n\nYou can also set the flag for individual commands using the top-level ``-o``\nflag:\n\n gsutil -o \"GSUtil:use_gcloud_storage=True\" -m cp -p file gs://bucket/obj"
}
},
"support": {
"capsule": "Google Cloud Storage Support",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"support"
],
"positionals": [],
"release": "GA",
"sections": {
"AND ACCOUNT QUESTIONS": "A) For billing documentation, please visit\nhttps://cloud.google.com/storage/pricing.\nIf you want to cancel billing, follow the instructions at\n`Cloud Storage FAQ `_.\nCaution: When you disable billing, you also disable the Google Cloud Storage\nservice. Make sure you want to disable the Google Cloud Storage service\nbefore you disable billing.\n\nB) For support regarding billing, please see\n`billing support `_.\nFor other questions regarding your account, Terms Of Service, Google\nCloud Console, or other administration-related questions please see\n`Google Cloud Platform support `_.",
"DESCRIPTION": "If you have any questions or encounter any problems with Google Cloud Storage,\nplease first read the `FAQ `_.\n\nIf you still have questions please use one of the following methods as\nappropriate, providing the details noted below:\n\nA) For API, tool usage, or other software development-related questions,\nplease search for and post questions on Stack Overflow, using the official\n`google-cloud-storage tag\n`_. Our\nsupport team actively monitors questions to this tag and we'll do our best to\nrespond.\n\nB) For gsutil bugs or feature requests, please check if there is already a\n`existing GitHub issue `_\nthat covers your request. If not, create a\n`new GitHub issue `_.\n\nTo help us diagnose any issues you encounter, when creating a new issue\nplease provide these details in addition to the description of your problem:\n\n- The resource you are attempting to access (bucket name, object name),\n assuming they are not sensitive.\n- The operation you attempted (GET, PUT, etc.)\n- The time and date (including timezone) at which you encountered the problem\n- If you can use gsutil to reproduce your issue, specify the -D option to\n display your request's HTTP details, and provide these details in the\n issue.\n\nWarning: The gsutil -d, -D, and -DD options will also print the authentication\nheader with authentication credentials for your Google Cloud Storage account.\nMake sure to remove any \"Authorization:\" headers before you post HTTP details\nto the issue. Note also that if you upload files large enough to use resumable\nuploads, the resumable upload IDs are security-sensitive while an upload\nis not yet complete, so should not be posted on public forums.\n\nIf you make any local modifications to gsutil, please make sure to use\na released copy of gsutil (instead of your locally modified copy) when\nproviding the gsutil -D output noted above. We cannot support versions\nof gsutil that include local modifications. (However, we're open to user\ncontributions; see \"gsutil help dev\".)"
}
},
"versions": {
"capsule": "Object Versioning and Concurrency Control",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"help",
"versions"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Versioning-enabled buckets maintain noncurrent versions of objects, providing\na way to un-delete data that you accidentally deleted, or to retrieve older\nversions of your data. Noncurrent objects are ignored by gsutil commands\nunless you indicate it should do otherwise by setting a relevant command flag\nor by including a specific generation number in your command. For example,\nwildcards like ``*`` and ``**`` do not, by themselves, act on noncurrent\nobject versions.\n\nWhen using gsutil cp, you cannot specify a version-specific URL as the\ndestination, because writes to Cloud Storage always create a new version.\nTrying to specify a version-specific URL as the destination of ``gsutil cp``\nresults in an error. When you specify a noncurrent object as a source in a\ncopy command, you always create a new object version and retain the original\n(even when using the command to restore a live version). You can use the\n``gsutil mv`` command to simultaneously restore an object version and remove\nthe noncurrent copy that was used as the source.\n\nYou can turn versioning on or off for a bucket at any time. Turning\nversioning off leaves existing object versions in place and simply causes\nthe bucket to delete the existing live version of the object whenever a new\nversion is uploaded.\n\nRegardless of whether you have enabled versioning on a bucket, every object\nhas two associated positive integer fields:\n\n- the generation, which is updated when a new object replaces an existing\n object with the same name. Note that there is no guarantee that generation\n numbers increase for successive versions, only that each new version has a\n unique generation number.\n- the metageneration, which identifies the metadata generation. It starts\n at 1; is updated every time the metadata (e.g., ACL or Content-Type) for a\n given content generation is updated; and gets reset when the generation\n number changes.\n\nOf these two integers, only the generation is used when working with versioned\ndata. Both generation and metageneration can be used with concurrency control.\n\nTo learn more about versioning and concurrency, see the following documentation:\n\n- `Overview of Object Versioning\n `_\n- `Guide for using Object Versioning\n `_\n- The `reference page for the gsutil versioning command\n `_\n- `Request preconditions\n `_"
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"help"
],
"positionals": [],
"release": "GA",
"sections": {}
},
"hmac": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {
"create": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {},
"flags": {
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project in which\n to create a key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hmac",
"create"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``hmac create`` command creates an HMAC key for the specified service\naccount:\n\n gsutil hmac create test.service.account@test_project.iam.gserviceaccount.com\n\nThe secret key material is only available upon creation, so be sure to store\nthe returned secret along with the access_id."
}
},
"delete": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {},
"flags": {
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project from which to\n delete a key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hmac",
"delete"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``hmac delete`` command permanently deletes the specified HMAC key:\n\n gsutil hmac delete GOOG56JBMFZX6PMPTQ62VD2\n\nNote that keys must be updated to be in the ``INACTIVE`` state before they can be\ndeleted."
}
},
"get": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {},
"flags": {
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project from which to\n get a key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hmac",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``hmac get`` command retrieves the specified HMAC key's metadata:\n\n gsutil hmac get GOOG56JBMFZX6PMPTQ62VD2\n\nNote that there is no option to retrieve a key's secret material after it has\nbeen created."
}
},
"list": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Show all keys, including recently deleted\n keys.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "Use long listing format. Shows each key's full\n metadata excluding the secret.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project from\n which to list keys.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": " Filter keys for a single service account.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hmac",
"list"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``hmac list`` command lists the HMAC key metadata for keys in the\nspecified project. If no project is specified in the command, the default\nproject is used."
}
},
"update": {
"capsule": "CRUD operations on service account HMAC keys.",
"commands": {},
"flags": {
"-e": {
"attr": {},
"category": "",
"default": "",
"description": " If provided, the update will only be performed\n if the specified etag matches the etag of the\n stored key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project in\n which to update a key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": " Sets the state of the specified key to either\n ``ACTIVE`` or ``INACTIVE``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"hmac",
"update"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``hmac update`` command sets the state of the specified key:\n\n gsutil hmac update -s INACTIVE -e M42da= GOOG56JBMFZX6PMPTQ62VD2\n\nValid state arguments are ``ACTIVE`` and ``INACTIVE``. To set a key to state\n``DELETED``, use the ``hmac delete`` command on an ``INACTIVE`` key. If an etag\nis set in the command, it will only succeed if the provided etag matches the etag\nof the stored key."
}
}
},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Show all keys, including recently deleted\n keys.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": " If provided, the update will only be performed\n if the specified etag matches the etag of the\n stored key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "Use long listing format. Shows each key's full\n metadata excluding the secret.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " Specify the ID or number of the project in\n which to update a key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": " Sets the state of the specified key to either\n ``ACTIVE`` or ``INACTIVE``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": " Filter keys for a single service account.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"hmac"
],
"positionals": [],
"release": "GA",
"sections": {
"CREATE": "The ``hmac create`` command creates an HMAC key for the specified service\naccount:\n\n gsutil hmac create test.service.account@test_project.iam.gserviceaccount.com\n\nThe secret key material is only available upon creation, so be sure to store\nthe returned secret along with the access_id.",
"DELETE": "The ``hmac delete`` command permanently deletes the specified HMAC key:\n\n gsutil hmac delete GOOG56JBMFZX6PMPTQ62VD2\n\nNote that keys must be updated to be in the ``INACTIVE`` state before they can be\ndeleted.",
"DESCRIPTION": "You can use the ``hmac`` command to interact with service account `HMAC keys\n`_.\n\nThe ``hmac`` command has five sub-commands:",
"GET": "The ``hmac get`` command retrieves the specified HMAC key's metadata:\n\n gsutil hmac get GOOG56JBMFZX6PMPTQ62VD2\n\nNote that there is no option to retrieve a key's secret material after it has\nbeen created.",
"LIST": "The ``hmac list`` command lists the HMAC key metadata for keys in the\nspecified project. If no project is specified in the command, the default\nproject is used.",
"UPDATE": "The ``hmac update`` command sets the state of the specified key:\n\n gsutil hmac update -s INACTIVE -e M42da= GOOG56JBMFZX6PMPTQ62VD2\n\nValid state arguments are ``ACTIVE`` and ``INACTIVE``. To set a key to state\n``DELETED``, use the ``hmac delete`` command on an ``INACTIVE`` key. If an etag\nis set in the command, it will only succeed if the provided etag matches the etag\nof the stored key."
}
},
"iam": {
"capsule": "Get, set, or change bucket and/or object IAM permissions.",
"commands": {
"ch": {
"capsule": "Get, set, or change bucket and/or object IAM permissions.",
"commands": {},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Removes roles granted to the specified principal.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "The default gsutil error-handling mode is fail-fast. This flag\nanges the request to fail-silent mode. This is implicitly\nt when you invoke the gsutil ``-m`` option.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs ``iam ch`` recursively to all objects under the\necified bucket.\nis flag can only be set if the policy exclusively uses\nroles/storage.legacyObjectReader`` or ``roles/storage.legacyObjectOwner``.\nis flag cannot be used if the bucket is configured\nr uniform bucket-level access.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"iam",
"ch"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``iam ch`` command incrementally updates Cloud IAM policies. You can specify\nmultiple access grants or removals in a single command. The access changes are\napplied as a batch to each url in the order in which they appear in the command\nline arguments. Each access change specifies a principal and a role that\nis either granted or revoked.\n\nYou can use gsutil ``-m`` to handle object-level operations in parallel.\n\nNOTE: The ``iam ch`` command cannot be used to change the Cloud IAM policy of a\nresource that contains conditions in its policy bindings. Attempts to do so\nresult in an error. To change the Cloud IAM policy of such a resource, you can\nperform a read-modify-write operation by saving the policy to a file using\n``iam get``, editing the file, and setting the updated policy using\n``iam set``.",
"EXAMPLES": "Examples for the ``ch`` sub-command:\n\nTo grant a single role to a single principal for some targets:\n\n gsutil iam ch user:john.doe@example.com:objectCreator gs://ex-bucket\n\nTo make a bucket's objects publicly readable:\n\n gsutil iam ch allUsers:objectViewer gs://ex-bucket\n\nTo grant multiple bindings to a bucket:\n\n gsutil iam ch user:john.doe@example.com:objectCreator \\\n domain:www.my-domain.org:objectViewer gs://ex-bucket\n\nTo specify more than one role for a particular principal:\n\n gsutil iam ch user:john.doe@example.com:objectCreator,objectViewer \\\n gs://ex-bucket\n\nTo specify a custom role for a particular principal:\n\n gsutil iam ch user:john.doe@example.com:roles/customRoleName gs://ex-bucket\n\nTo apply a grant and simultaneously remove a binding to a bucket:\n\n gsutil iam ch -d group:readers@example.com:legacyBucketReader \\\n group:viewers@example.com:objectViewer gs://ex-bucket\n\nTo remove a user from all roles on a bucket:\n\n gsutil iam ch -d user:john.doe@example.com gs://ex-bucket"
}
},
"get": {
"capsule": "Get, set, or change bucket and/or object IAM permissions.",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"iam",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``iam get`` command gets the Cloud IAM policy for a bucket or object, which you\ncan save and edit for use with the ``iam set`` command.\n\nThe following examples save the bucket or object's Cloud IAM policy to a text file:\n\n gsutil iam get gs://example > bucket_iam.txt\n gsutil iam get gs://example/important.txt > object_iam.txt\n\nThe Cloud IAM policy returned by ``iam get`` includes an etag. The etag is used in the\nprecondition check for ``iam set`` unless you override it using\n``iam set -e``."
}
},
"set": {
"capsule": "Get, set, or change bucket and/or object IAM permissions.",
"commands": {},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Performs ``iam set`` on all object versions.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": " Performs the precondition check on each object with the\necified etag before setting the policy. You can retrieve the policy's\nag using ``iam get``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "The default gsutil error-handling mode is fail-fast. This flag\nanges the request to fail-silent mode. This option is implicitly\nt when you use the gsutil ``-m`` option.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs ``iam set`` recursively on all objects under the\necified bucket.\nis flag can only be set if the policy exclusively uses\nroles/storage.legacyObjectReader`` or ``roles/storage.legacyObjectOwner``.\nis flag cannot be used if the bucket is configured\nr uniform bucket-level access.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"iam",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``iam set`` command sets a Cloud IAM policy on one or more buckets or objects,\nreplacing the existing policy on those buckets or objects. For an example of the correct\nformatting for a Cloud IAM policy, see the output of the ``iam get`` command.\n\nYou can use the ``iam ch`` command to edit an existing policy, even in the\npresence of concurrent updates. You can also edit the policy concurrently using\nthe ``-e`` flag to override the Cloud IAM policy's etag. Specifying ``-e`` with an\nempty string (i.e. ``gsutil iam set -e '' ...``) instructs gsutil to skip the precondition\ncheck when setting the Cloud IAM policy.\n\nWhen you set a Cloud IAM policy on a large number of objects, you should use the\ngsutil ``-m`` option for concurrent processing. The following command\napplies ``iam.txt`` to all objects in the ``dogs`` bucket:\n\n gsutil -m iam set -r iam.txt gs://dogs\n\nNote that only object-level operations are parallelized; setting a Cloud IAM policy\non a large number of buckets with the ``-m`` flag does not improve performance."
}
}
},
"flags": {
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Performs ``iam set`` on all object versions.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Removes roles granted to the specified principal.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": " Performs the precondition check on each object with the\necified etag before setting the policy. You can retrieve the policy's\nag using ``iam get``.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "The default gsutil error-handling mode is fail-fast. This flag\nanges the request to fail-silent mode. This is implicitly\nt when you invoke the gsutil ``-m`` option.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Performs ``iam ch`` recursively to all objects under the\necified bucket.\nis flag can only be set if the policy exclusively uses\nroles/storage.legacyObjectReader`` or ``roles/storage.legacyObjectOwner``.\nis flag cannot be used if the bucket is configured\nr uniform bucket-level access.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"iam"
],
"positionals": [],
"release": "GA",
"sections": {
"CH": "The ``iam ch`` command incrementally updates Cloud IAM policies. You can specify\nmultiple access grants or removals in a single command. The access changes are\napplied as a batch to each url in the order in which they appear in the command\nline arguments. Each access change specifies a principal and a role that\nis either granted or revoked.\n\nYou can use gsutil ``-m`` to handle object-level operations in parallel.\n\nNOTE: The ``iam ch`` command cannot be used to change the Cloud IAM policy of a\nresource that contains conditions in its policy bindings. Attempts to do so\nresult in an error. To change the Cloud IAM policy of such a resource, you can\nperform a read-modify-write operation by saving the policy to a file using\n``iam get``, editing the file, and setting the updated policy using\n``iam set``.",
"DESCRIPTION": "Cloud Identity and Access Management (Cloud IAM) allows you to control who has\naccess to the resources in your Google Cloud project. For more information,\nsee `Cloud Identity and Access Management\n`_.\n\nThe iam command has three sub-commands:",
"EXAMPLES": "Examples for the ``ch`` sub-command:\n\nTo grant a single role to a single principal for some targets:\n\n gsutil iam ch user:john.doe@example.com:objectCreator gs://ex-bucket\n\nTo make a bucket's objects publicly readable:\n\n gsutil iam ch allUsers:objectViewer gs://ex-bucket\n\nTo grant multiple bindings to a bucket:\n\n gsutil iam ch user:john.doe@example.com:objectCreator \\\n domain:www.my-domain.org:objectViewer gs://ex-bucket\n\nTo specify more than one role for a particular principal:\n\n gsutil iam ch user:john.doe@example.com:objectCreator,objectViewer \\\n gs://ex-bucket\n\nTo specify a custom role for a particular principal:\n\n gsutil iam ch user:john.doe@example.com:roles/customRoleName gs://ex-bucket\n\nTo apply a grant and simultaneously remove a binding to a bucket:\n\n gsutil iam ch -d group:readers@example.com:legacyBucketReader \\\n group:viewers@example.com:objectViewer gs://ex-bucket\n\nTo remove a user from all roles on a bucket:\n\n gsutil iam ch -d user:john.doe@example.com gs://ex-bucket",
"GET": "The ``iam get`` command gets the Cloud IAM policy for a bucket or object, which you\ncan save and edit for use with the ``iam set`` command.\n\nThe following examples save the bucket or object's Cloud IAM policy to a text file:\n\n gsutil iam get gs://example > bucket_iam.txt\n gsutil iam get gs://example/important.txt > object_iam.txt\n\nThe Cloud IAM policy returned by ``iam get`` includes an etag. The etag is used in the\nprecondition check for ``iam set`` unless you override it using\n``iam set -e``.",
"SET": "The ``iam set`` command sets a Cloud IAM policy on one or more buckets or objects,\nreplacing the existing policy on those buckets or objects. For an example of the correct\nformatting for a Cloud IAM policy, see the output of the ``iam get`` command.\n\nYou can use the ``iam ch`` command to edit an existing policy, even in the\npresence of concurrent updates. You can also edit the policy concurrently using\nthe ``-e`` flag to override the Cloud IAM policy's etag. Specifying ``-e`` with an\nempty string (i.e. ``gsutil iam set -e '' ...``) instructs gsutil to skip the precondition\ncheck when setting the Cloud IAM policy.\n\nWhen you set a Cloud IAM policy on a large number of objects, you should use the\ngsutil ``-m`` option for concurrent processing. The following command\napplies ``iam.txt`` to all objects in the ``dogs`` bucket:\n\n gsutil -m iam set -r iam.txt gs://dogs\n\nNote that only object-level operations are parallelized; setting a Cloud IAM policy\non a large number of buckets with the ``-m`` flag does not improve performance."
}
},
"kms": {
"capsule": "Configure Cloud KMS encryption",
"commands": {
"authorize": {
"capsule": "Configure Cloud KMS encryption",
"commands": {},
"flags": {
"-k": {
"attr": {},
"category": "",
"default": "",
"description": " The path to the KMS key to use. The path has\nthe following form:\n``projects/[project-id]/locations/[location]/keyRings/[key-ring]/cryptoKeys/[my-key]``",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " The ID or number of the project being authorized to use the Cloud\nKMS key. If this flag is not included, your\ndefault project is authorized.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"kms",
"authorize"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The authorize sub-command checks that the default (or supplied) project has a\nCloud Storage service agent created for it, and if not, it creates one. It then\nadds appropriate encrypt/decrypt permissions to Cloud KMS resources such that the\nservice agent can write and read Cloud KMS-encrypted objects in buckets associated\nwith the service agent's project.",
"EXAMPLES": "Authorize \"my-project\" to use a Cloud KMS key:\n\n gsutil kms authorize -p my-project \\\n -k projects/key-project/locations/us-east1/keyRings/key-ring/cryptoKeys/my-key"
}
},
"encryption": {
"capsule": "Configure Cloud KMS encryption",
"commands": {},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Clear the default KMS key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-k": {
"attr": {},
"category": "",
"default": "",
"description": " Set the default KMS key for my-bucket using the\nfull path to the key, which has the following\nform:\n``projects/[project-id]/locations/[location]/keyRings/[key-ring]/cryptoKeys/[my-key]``",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-w": {
"attr": {},
"category": "",
"default": "",
"description": "(used with -k key) Display a warning rather than\nfailing if gsutil is unable to verify that\nthe specified key contains the correct IAM bindings\nfor encryption/decryption. This is useful for\nusers that do not have getIamPolicy permission\nbut know that the key has the correct IAM policy\nfor encryption in the user's project.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-w",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"kms",
"encryption"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The encryption sub-command is used to set, display, or clear a bucket's\ndefault KMS key, which is used to encrypt newly-written objects if no other\nkey is specified.",
"EXAMPLES": "Set the default KMS key for my-bucket:\n\n gsutil kms encryption \\\n -k projects/key-project/locations/us-east1/keyRings/key-ring/cryptoKeys/my-key \\\n gs://my-bucket\n\nShow the default KMS key for my-bucket, if one is set:\n\n gsutil kms encryption gs://my-bucket\n\nClear the default KMS key so newly-written objects are not encrypted using it:\n\n gsutil kms encryption -d gs://my-bucket\n\nOnce you clear the default KMS key, newly-written objects are encrypted with\nGoogle-managed encryption keys by default."
}
},
"serviceaccount": {
"capsule": "Configure Cloud KMS encryption",
"commands": {},
"flags": {
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " The ID or number of the project whose Cloud Storage service\nagent is being requested. If this flag is not\nincluded, your default project is used.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"kms",
"serviceaccount"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The serviceaccount sub-command displays the Cloud Storage service agent\nthat is used to perform Cloud KMS operations against your default project\n(or a supplied project).",
"EXAMPLES": "Show the service account for my-project:\n\n gsutil kms serviceaccount -p my-project"
}
}
},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Clear the default KMS key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-k": {
"attr": {},
"category": "",
"default": "",
"description": " Set the default KMS key for my-bucket using the\nfull path to the key, which has the following\nform:\n``projects/[project-id]/locations/[location]/keyRings/[key-ring]/cryptoKeys/[my-key]``",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": " The ID or number of the project whose Cloud Storage service\nagent is being requested. If this flag is not\nincluded, your default project is used.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-w": {
"attr": {},
"category": "",
"default": "",
"description": "(used with -k key) Display a warning rather than\nfailing if gsutil is unable to verify that\nthe specified key contains the correct IAM bindings\nfor encryption/decryption. This is useful for\nusers that do not have getIamPolicy permission\nbut know that the key has the correct IAM policy\nfor encryption in the user's project.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-w",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"kms"
],
"positionals": [],
"release": "GA",
"sections": {
"AUTHORIZE": "The authorize sub-command checks that the default (or supplied) project has a\nCloud Storage service agent created for it, and if not, it creates one. It then\nadds appropriate encrypt/decrypt permissions to Cloud KMS resources such that the\nservice agent can write and read Cloud KMS-encrypted objects in buckets associated\nwith the service agent's project.",
"DESCRIPTION": "The kms command is used to configure Google Cloud Storage and Cloud KMS\nresources to support encryption of Cloud Storage objects with\n`Cloud KMS keys\n`_.\n\nThe kms command has three sub-commands that deal with configuring Cloud\nStorage's integration with Cloud KMS: ``authorize``, ``encryption``,\nand ``serviceaccount``.\n\nBefore using this command, read the `prerequisites\n`_.\nfor using Cloud KMS with Cloud Storage.",
"ENCRYPTION": "The encryption sub-command is used to set, display, or clear a bucket's\ndefault KMS key, which is used to encrypt newly-written objects if no other\nkey is specified.",
"EXAMPLES": "Show the service account for my-project:\n\n gsutil kms serviceaccount -p my-project",
"SERVICEACCOUNT": "The serviceaccount sub-command displays the Cloud Storage service agent\nthat is used to perform Cloud KMS operations against your default project\n(or a supplied project)."
}
},
"label": {
"capsule": "Get, set, or change the label configuration of a bucket.",
"commands": {
"ch": {
"capsule": "Get, set, or change the label configuration of a bucket.",
"commands": {},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove the label with the specified key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "Add or update a label with the specified key and value.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"label",
"ch"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"label ch\" command updates a bucket's label configuration, applying the\nlabel changes specified by the -l and -d flags. You can specify multiple\nlabel changes in a single command run; all changes will be made atomically to\neach bucket.",
"EXAMPLES": "Examples for \"ch\" sub-command:\n\nAdd the label \"key-foo:value-bar\" to the bucket \"example-bucket\":\n\n gsutil label ch -l key-foo:value-bar gs://example-bucket\n\nChange the above label to have a new value:\n\n gsutil label ch -l key-foo:other-value gs://example-bucket\n\nAdd a new label and delete the old one from above:\n\n gsutil label ch -l new-key:new-value -d key-foo gs://example-bucket"
}
},
"get": {
"capsule": "Get, set, or change the label configuration of a bucket.",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"label",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"label get\" command gets the `labels\n`_\napplied to a bucket, which you can save and edit for use with the \"label set\"\ncommand."
}
},
"set": {
"capsule": "Get, set, or change the label configuration of a bucket.",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"label",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"label set\" command allows you to set the labels on one or more\nbuckets. You can retrieve a bucket's labels using the \"label get\" command,\nsave the output to a file, edit the file, and then use the \"label set\"\ncommand to apply those labels to the specified bucket(s). For\nexample:\n\n gsutil label get gs://bucket > labels.json\n\nMake changes to labels.json, such as adding an additional label, then:\n\n gsutil label set labels.json gs://example-bucket\n\nNote that you can set these labels on multiple buckets at once:\n\n gsutil label set labels.json gs://bucket-foo gs://bucket-bar"
}
}
},
"flags": {
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Remove the label with the specified key.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "Add or update a label with the specified key and value.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"label"
],
"positionals": [],
"release": "GA",
"sections": {
"CH": "The \"label ch\" command updates a bucket's label configuration, applying the\nlabel changes specified by the -l and -d flags. You can specify multiple\nlabel changes in a single command run; all changes will be made atomically to\neach bucket.",
"DESCRIPTION": "Gets, sets, or changes the label configuration (also called the tagging\nconfiguration by other storage providers) of one or more buckets. An example\nlabel JSON document looks like the following:\n\n {\n \"your_label_key\": \"your_label_value\",\n \"your_other_label_key\": \"your_other_label_value\"\n }\n\nThe label command has three sub-commands:",
"EXAMPLES": "Examples for \"ch\" sub-command:\n\nAdd the label \"key-foo:value-bar\" to the bucket \"example-bucket\":\n\n gsutil label ch -l key-foo:value-bar gs://example-bucket\n\nChange the above label to have a new value:\n\n gsutil label ch -l key-foo:other-value gs://example-bucket\n\nAdd a new label and delete the old one from above:\n\n gsutil label ch -l new-key:new-value -d key-foo gs://example-bucket",
"GET": "The \"label get\" command gets the `labels\n`_\napplied to a bucket, which you can save and edit for use with the \"label set\"\ncommand.",
"SET": "The \"label set\" command allows you to set the labels on one or more\nbuckets. You can retrieve a bucket's labels using the \"label get\" command,\nsave the output to a file, edit the file, and then use the \"label set\"\ncommand to apply those labels to the specified bucket(s). For\nexample:\n\n gsutil label get gs://bucket > labels.json\n\nMake changes to labels.json, such as adding an additional label, then:\n\n gsutil label set labels.json gs://example-bucket\n\nNote that you can set these labels on multiple buckets at once:\n\n gsutil label set labels.json gs://bucket-foo gs://bucket-bar"
}
},
"lifecycle": {
"capsule": "Get or set lifecycle configuration for a bucket",
"commands": {
"get": {
"capsule": "Get or set lifecycle configuration for a bucket",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"lifecycle",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Gets the lifecycle management configuration for a given bucket. You can get the\nlifecycle management configuration for only one bucket at a time. To update the\nconfiguration, you can redirect the output of the ``get`` command into a file,\nedit the file, and then set it on the bucket using the ``set`` sub-command."
}
},
"set": {
"capsule": "Get or set lifecycle configuration for a bucket",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"lifecycle",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Sets the lifecycle management configuration on one or more buckets. The ``config-json-file``\nspecified on the command line should be a path to a local file containing\nthe lifecycle configuration JSON document."
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"lifecycle"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "You can use the ``lifecycle`` command to get or set lifecycle management policies\nfor a given bucket. This command is supported for buckets only, not\nobjects. For more information, see `Object Lifecycle Management\n`_.\n\nThe ``lifecycle`` command has two sub-commands:",
"EXAMPLES": "The following lifecycle management configuration JSON document specifies that all objects\nin this bucket that are more than 365 days old are deleted automatically:\n\n {\n \"rule\":\n [\n {\n \"action\": {\"type\": \"Delete\"},\n \"condition\": {\"age\": 365}\n }\n ]\n }\n\nThe following empty lifecycle management configuration JSON document removes all\nlifecycle configuration for a bucket:\n\n {}",
"GET": "Gets the lifecycle management configuration for a given bucket. You can get the\nlifecycle management configuration for only one bucket at a time. To update the\nconfiguration, you can redirect the output of the ``get`` command into a file,\nedit the file, and then set it on the bucket using the ``set`` sub-command.",
"SET": "Sets the lifecycle management configuration on one or more buckets. The ``config-json-file``\nspecified on the command line should be a path to a local file containing\nthe lifecycle configuration JSON document."
}
},
"logging": {
"capsule": "Configure or retrieve logging on buckets",
"commands": {
"get": {
"capsule": "Configure or retrieve logging on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"logging",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "If logging is enabled for the specified bucket url, the server responds\nwith a JSON document that looks something like this:\n\n {\n \"logBucket\": \"my_logging_bucket\",\n \"logObjectPrefix\": \"UsageLog\"\n }\n\nYou can download log data from your log bucket using the gsutil cp command."
}
},
"set": {
"capsule": "Configure or retrieve logging on buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"logging",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``set`` sub-command has two sub-commands:",
"OFF": "This command disables usage logging of the buckets named by the specified\nURLs. All URLs must name Cloud Storage buckets (e.g., ``gs://bucket``).\n\nNo logging data is removed from the log buckets when you disable logging,\nbut Google Cloud Storage stops delivering new logs once you have run this\ncommand.",
"ON": "The ``gsutil logging set on`` command enables usage logging of the buckets\nnamed by the specified URLs, outputting log files to the bucket specified\nwith the ``-b`` flag. Cloud Storage doesn't validate the existence of the\noutput bucket, so users should ensure it already exists, and all URLs must\nname Cloud Storage buckets (e.g., ``gs://bucket``). The optional ``-o``\nflag specifies the prefix for log object names. The default prefix is the\nbucket name. For example, the command:\n\n gsutil logging set on -b gs://my_logging_bucket -o UsageLog \\\n gs://my_bucket1 gs://my_bucket2\n\ncauses all read and write activity to objects in ``gs://mybucket1`` and\n``gs://mybucket2`` to be logged to objects prefixed with the name\n``UsageLog``, with those log objects written to the bucket\n``gs://my_logging_bucket``.\n\nIn addition to enabling logging on your bucket(s), you also need to grant\ncloud-storage-analytics@google.com write access to the log bucket, using this\ncommand:\n\n gsutil acl ch -g cloud-storage-analytics@google.com:W gs://my_logging_bucket\n\nNote that log data may contain sensitive information, so you should make\nsure to set an appropriate default bucket ACL to protect that data. (See\n\"gsutil help defacl\".)"
}
}
},
"flags": {
"-b": {
"attr": {},
"category": "",
"default": "",
"description": "bucket_name Specifies the bucket that stores the generated logs. This\n flag is only available for the ``set on`` command and is\n required for that command.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-b",
"nargs": "0",
"type": "bool",
"value": ""
},
"-o": {
"attr": {},
"category": "",
"default": "",
"description": "log_prefix Specifies a common prefix for the names of generated\n logs. This flag is only available for the ``set on``\n command and is optional for that command.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-o",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"logging"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Google Cloud Storage offers `usage logs and storage logs\n`_ in the form of CSV\nfiles that you can download and view. Usage logs provide information for all\nof the requests made on a specified bucket and are created hourly. Storage\nlogs provide information about the storage consumption of that bucket for\nthe last day and are created daily.\n\nOnce set up, usage logs and storage logs are automatically created as new\nobjects in a bucket that you specify. Usage logs and storage logs are\nsubject to the same pricing as other objects stored in Cloud Storage.\n\nFor a complete list of usage log fields and storage data fields, see\n`Usage and storage log format\n`_.\n\nThe logging command has two sub-commands:",
"GET": "If logging is enabled for the specified bucket url, the server responds\nwith a JSON document that looks something like this:\n\n {\n \"logBucket\": \"my_logging_bucket\",\n \"logObjectPrefix\": \"UsageLog\"\n }\n\nYou can download log data from your log bucket using the gsutil cp command.",
"OFF": "This command disables usage logging of the buckets named by the specified\nURLs. All URLs must name Cloud Storage buckets (e.g., ``gs://bucket``).\n\nNo logging data is removed from the log buckets when you disable logging,\nbut Google Cloud Storage stops delivering new logs once you have run this\ncommand.",
"ON": "The ``gsutil logging set on`` command enables usage logging of the buckets\nnamed by the specified URLs, outputting log files to the bucket specified\nwith the ``-b`` flag. Cloud Storage doesn't validate the existence of the\noutput bucket, so users should ensure it already exists, and all URLs must\nname Cloud Storage buckets (e.g., ``gs://bucket``). The optional ``-o``\nflag specifies the prefix for log object names. The default prefix is the\nbucket name. For example, the command:\n\n gsutil logging set on -b gs://my_logging_bucket -o UsageLog \\\n gs://my_bucket1 gs://my_bucket2\n\ncauses all read and write activity to objects in ``gs://mybucket1`` and\n``gs://mybucket2`` to be logged to objects prefixed with the name\n``UsageLog``, with those log objects written to the bucket\n``gs://my_logging_bucket``.\n\nIn addition to enabling logging on your bucket(s), you also need to grant\ncloud-storage-analytics@google.com write access to the log bucket, using this\ncommand:\n\n gsutil acl ch -g cloud-storage-analytics@google.com:W gs://my_logging_bucket\n\nNote that log data may contain sensitive information, so you should make\nsure to set an appropriate default bucket ACL to protect that data. (See\n\"gsutil help defacl\".)",
"SET": "The ``set`` sub-command has two sub-commands:"
}
},
"ls": {
"capsule": "List providers, buckets, or objects",
"commands": {},
"flags": {
"-L": {
"attr": {},
"category": "",
"default": "",
"description": "Prints even more detail than -l.\nte: If you use this option with the (non-default) XML API it\nnerates an additional request per object being listed, which\nkes the -L option run much more slowly and cost more than the\nfault JSON API.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-L",
"nargs": "0",
"type": "bool",
"value": ""
},
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Includes non-current object versions / generations in the listing\nnly useful with a versioning-enabled bucket). If combined with\n option also prints metageneration for each listed object.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-b": {
"attr": {},
"category": "",
"default": "",
"description": "Prints info about the bucket when used with a bucket URL.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-b",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "List matching subdirectory names instead of contents, and do not\ncurse into matching subdirectories even if the -R option is\necified.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Include ETag in long listing (-l) output.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "When used with -l, prints object sizes in human readable format\n.g., 1 KiB, 234 MiB, 2 GiB, etc.)",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "Prints long listing (owner, length).",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "proj_id Specifies the project ID or project number to use for listing\nckets.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "Requests a recursive listing, performing at least one listing\neration per subdirectory. If you have a large number of\nbdirectories and do not require recursive-style output ordering,\nu may be able to instead use wildcards to perform a flat\nsting, e.g. ``gsutil ls gs://mybucket/**``, which generally\nrforms fewer listing operations.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"ls"
],
"positionals": [],
"release": "GA",
"sections": {
"BUCKET DETAILS": "If you want to see information about the bucket itself, use the -b\noption. For example:\n\n gsutil ls -L -b gs://bucket\n\nprints something like:\n\n gs://bucket/ :\n Storage class: STANDARD\n Location constraint: US\n Versioning enabled: False\n Logging configuration: None\n Website configuration: None\n CORS configuration: None\n Lifecycle configuration: None\n Requester Pays enabled: True\n Labels: None\n Default KMS key: None\n Time created: Thu, 14 Jan 2016 19:25:17 GMT\n Time updated: Thu, 08 Jun 2017 21:17:59 GMT\n Metageneration: 1\n Bucket Policy Only enabled: False\n ACL:\n [\n {\n \"entity\": \"project-owners-867489160491\",\n \"projectTeam\": {\n \"projectNumber\": \"867489160491\",\n \"team\": \"owners\"\n },\n \"role\": \"OWNER\"\n }\n ]\n Default ACL:\n [\n {\n \"entity\": \"project-owners-867489160491\",\n \"projectTeam\": {\n \"projectNumber\": \"867489160491\",\n \"team\": \"owners\"\n },\n \"role\": \"OWNER\"\n }\n ]\n\nNote that some fields above (time created, time updated, metageneration) are\nnot available with the (non-default) XML API.",
"DESCRIPTION": "",
"OBJECT DETAILS": "If you specify the -l option, gsutil outputs additional information about\neach matching provider, bucket, subdirectory, or object. For example:\n\n gsutil ls -l gs://bucket/*.html gs://bucket/*.txt\n\nprints the object size, creation time stamp, and name of each matching\nobject, along with the total count and sum of sizes of all matching objects:\n\n 2276224 2020-03-02T19:25:17Z gs://bucket/obj1.html\n 3914624 2020-03-02T19:30:27Z gs://bucket/obj2.html\n 131 2020-03-02T19:37:45Z gs://bucket/obj3.txt\n TOTAL: 3 objects, 6190979 bytes (5.9 MiB)\n\nNote that the total listed in parentheses above is in mebibytes (or gibibytes,\ntebibytes, etc.), which corresponds to the unit of billing measurement for\nGoogle Cloud Storage.\n\nYou can get a listing of all the objects in the top-level bucket directory\n(along with the total count and sum of sizes) using a command like:\n\n gsutil ls -l gs://bucket\n\nTo print additional detail about objects and buckets use the gsutil ls -L\noption. For example:\n\n gsutil ls -L gs://bucket/obj1\n\nprints something like:\n\n gs://bucket/obj1:\n Creation time: Fri, 26 May 2017 22:55:44 GMT\n Update time: Tue, 18 Jul 2017 12:31:18 GMT\n Storage class: STANDARD\n Content-Length: 60183\n Content-Type: image/jpeg\n Hash (crc32c): zlUhtg==\n Hash (md5): Bv86IAzFzrD1Z2io/c7yqA==\n ETag: 5ca67960a586723b7344afffc81\n Generation: 1378862725952000\n Metageneration: 1\n ACL: [\n {\n \"entity\": \"project-owners-867484910061\",\n \"projectTeam\": {\n \"projectNumber\": \"867484910061\",\n \"team\": \"owners\"\n },\n \"role\": \"OWNER\"\n },\n {\n \"email\": \"jane@gmail.com\",\n \"entity\": \"user-jane@gmail.com\",\n \"role\": \"OWNER\"\n }\n ]\n TOTAL: 1 objects, 60183 bytes (58.77 KiB)\n\nNote that results may contain additional fields, such as custom metadata or\na storage class update time, if they are applicable to the object.\n\nAlso note that some fields, such as update time, are not available with the\n(non-default) XML API.\n\nSee also \"gsutil help acl\" for getting a more readable version of the ACL.",
"PROVIDERS, BUCKETS, SUBDIRECTORIES, AND OBJECTS": "If you run ``gsutil ls`` without URLs, it lists all of the Google Cloud Storage\nbuckets under your default project ID (or all of the Cloud Storage buckets\nunder the project you specify with the ``-p`` flag):\n\n gsutil ls\n\nIf you specify one or more provider URLs, ``gsutil ls`` lists buckets at each\nlisted provider:\n\n gsutil ls gs://\n\ngsutil currently supports ``gs://`` and ``s3://`` as valid providers\n\nIf you specify bucket URLs, or use `URI wildcards\n`_ to capture a set of\nbuckets, ``gsutil ls`` lists objects at the top level of each bucket, along\nwith the names of each subdirectory. For example:\n\n gsutil ls gs://bucket\n\nmight produce output like:\n\n gs://bucket/obj1.htm\n gs://bucket/obj2.htm\n gs://bucket/images1/\n gs://bucket/images2/\n\nThe \"/\" at the end of the last 2 URLs tells you they are subdirectories,\nwhich you can list using:\n\n gsutil ls gs://bucket/images*\n\nIf you specify object URLs, ``gsutil ls`` lists the specified objects. For\nexample:\n\n gsutil ls gs://bucket/*.txt\n\nlists all files whose name matches the above wildcard at the top level of\nthe bucket.\n\nFor more details, see `URI wildcards\n`_."
}
},
"mb": {
"capsule": "Make buckets",
"commands": {},
"flags": {
"--autoclass": {
"attr": {},
"category": "",
"default": "",
"description": "Enables the Autoclass feature that automatically\n sets object storage classes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--autoclass",
"nargs": "0",
"type": "bool",
"value": ""
},
"--pap": {
"attr": {},
"category": "",
"default": "",
"description": "setting Specifies the public access prevention setting. Valid\n values are \"enforced\" or \"inherited\". When\n \"enforced\", objects in this bucket cannot be made\n publicly accessible. Default is \"inherited\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--pap",
"nargs": "0",
"type": "bool",
"value": ""
},
"--placement": {
"attr": {},
"category": "",
"default": "",
"description": "reg1,reg2 Two regions that form the custom dual-region.\n Only regions within the same continent are or will ever\n be valid. Invalid location pairs (such as\n mixed-continent, or with unsupported regions)\n will return an error.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--placement",
"nargs": "0",
"type": "bool",
"value": ""
},
"--retention": {
"attr": {},
"category": "",
"default": "",
"description": "time Specifies the retention policy. Default is no retention\n policy. This can only be set on gs:// buckets and\n requires using the JSON API. For more details about\n retention policy see \"gsutil help retention\"",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--retention",
"nargs": "0",
"type": "bool",
"value": ""
},
"--rpo": {
"attr": {},
"category": "",
"default": "",
"description": "setting Specifies the `replication setting `_.\n This flag is not valid for single-region buckets,\n and multi-region buckets only accept a value of\n DEFAULT. Valid values for dual region buckets\n are (ASYNC_TURBO|DEFAULT). If unspecified, DEFAULT is applied\n for dual-region and multi-region buckets.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "--rpo",
"nargs": "0",
"type": "bool",
"value": ""
},
"-b": {
"attr": {},
"category": "",
"default": "",
"description": " Specifies the uniform bucket-level access setting.\n When \"on\", ACLs assigned to objects in the bucket are\n not evaluated. Consequently, only IAM policies grant\n access to objects in these buckets. Default is \"off\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-b",
"nargs": "0",
"type": "bool",
"value": ""
},
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "class Specifies the default storage class. Default is\n ``Standard``. See `Available storage classes\n `_\n for a list of possible values.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-k": {
"attr": {},
"category": "",
"default": "",
"description": " Set the default KMS key using the full path to the key,\n which has the following form:\n ``projects/[project-id]/locations/[location]/keyRings/[key-ring]/cryptoKeys/[my-key]``",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-l": {
"attr": {},
"category": "",
"default": "",
"description": "location Can be any supported location. See\n https://cloud.google.com/storage/docs/locations\n for a discussion of this distinction. Default is US.\n Locations are case insensitive.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-l",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "project Specifies the project ID or project number to create\n the bucket under.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": "class Same as -c.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"mb"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Create one or more new buckets. Google Cloud Storage has a single namespace,\nso you are not allowed to create a bucket with a name already in use by\nanother user. You can, however, carve out parts of the bucket name space\ncorresponding to your company's domain name (see \"gsutil help naming\").\n\nIf you don't specify a project ID or project number using the -p option, the\nbuckets are created using the default project ID specified in your `gsutil\nconfiguration file `_.\n\nThe -l option specifies the location for the buckets. Once a bucket is created\nin a given location, it cannot be moved to a different location. Instead, you\nneed to create a new bucket, move the data over, and then delete the original\nbucket.",
"LOCATIONS": "You can specify one of the `available locations\n`_ for a bucket\nwith the -l option.\n\nExamples:\n\n gsutil mb -l asia gs://some-bucket\n\n gsutil mb -c standard -l us-east1 gs://some-bucket\n\nIf you don't specify a -l option, the bucket is created in the default\nlocation (US).",
"STORAGE CLASSES": "You can specify one of the `storage classes\n`_ for a bucket\nwith the -c option.\n\nExample:\n\n gsutil mb -c nearline gs://some-bucket\n\nSee online documentation for\n`pricing `_ and\n`SLA `_ details.\n\nIf you don't specify a -c option, the bucket is created with the\ndefault storage class Standard Storage."
}
},
"mv": {
"capsule": "Move/rename objects",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"mv"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``gsutil mv`` command allows you to move data between your local file\nsystem and the cloud, move data within the cloud, and move data between\ncloud storage providers. For example, to move all objects from a\nbucket to a local directory you could use:\n\n gsutil mv gs://my_bucket/* dir\n\nSimilarly, to move all objects from a local directory to a bucket you could\nuse:\n\n gsutil mv ./dir gs://my_bucket",
"GROUPS OF OBJECTS": "You can use the ``gsutil mv`` command to rename all objects with a given\nprefix to have a new prefix. For example, the following command renames all\nobjects under gs://my_bucket/oldprefix to be under gs://my_bucket/newprefix,\notherwise preserving the naming structure:\n\n gsutil mv gs://my_bucket/oldprefix gs://my_bucket/newprefix\n\nIf you do a rename as specified above and you want to preserve ACLs, you\nshould use the ``-p`` option (see OPTIONS).\n\nIf you have a large number of files to move you might want to use the\n``gsutil -m`` option, to perform a multi-threaded/multi-processing move:\n\n gsutil -m mv gs://my_bucket/oldprefix gs://my_bucket/newprefix",
"OPERATION": "Unlike the case with many file systems, the gsutil mv command does not\nperform a single atomic operation. Rather, it performs a copy from source\nto destination followed by removing the source for each object.\n\nA consequence of this is that, in addition to normal network and operation\ncharges, if you move a Nearline Storage, Coldline Storage, or Archive Storage\nobject, deletion and data retrieval charges apply. See the `documentation\n`_ for pricing details."
}
},
"notification": {
"capsule": "Configure object change notification",
"commands": {
"create": {
"capsule": "Configure object change notification",
"commands": {},
"flags": {
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Specify an event type filter for this notification config. Cloud\nage only sends notifications of this type. You may specify this\nmeter multiple times to allow multiple event types. If not\nified, Cloud Storage sends notifications for all event types.\nvalid types are:\nJECT_FINALIZE - An object has been created.\nJECT_METADATA_UPDATE - The metadata of an object has changed.\nJECT_DELETE - An object has been permanently deleted.\nJECT_ARCHIVE - A live version of an object has become a\nnoncurrent version.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies the payload format of notification messages. Must be\ner \"json\" for a payload matches the object metadata for the\n API, or \"none\" to specify no payload at all. In either case,\nfication details are available in the message attributes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies a key:value attribute that is appended to the set\nttributes sent to Cloud Pub/Sub for all events associated with\n notification config. You may specify this parameter multiple\ns to set multiple attributes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies a prefix path filter for this notification config. Cloud\nage only sends notifications for objects in this bucket whose\ns begin with the specified prefix.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": "Skips creation and permission assignment of the Cloud Pub/Sub topic.\n is useful if the caller does not have permission to access\ntopic in question, or if the topic already exists and has the\nopriate publish permission assigned.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
},
"-t": {
"attr": {},
"category": "",
"default": "",
"description": "The Cloud Pub/Sub topic to which notifications should be sent. If\nspecified, this command chooses a topic whose project is your\nult project and whose ID is the same as the Cloud Storage bucket\n.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-t",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"notification",
"create"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The create sub-command creates a notification config on a bucket, establishing\na flow of event notifications from Cloud Storage to a Cloud Pub/Sub topic. As\npart of creating this flow, the create command also verifies that the\ndestination Cloud Pub/Sub topic exists, creating it if necessary, and verifies\nthat the Cloud Storage bucket has permission to publish events to that topic,\ngranting the permission if necessary.\n\nIf a destination Cloud Pub/Sub topic is not specified with the -t flag, Cloud\nStorage chooses a topic name in the default project whose ID is the same as\nthe bucket name. For example, if the default project ID specified is\n'default-project' and the bucket being configured is gs://example-bucket, the\ncreate command uses the Cloud Pub/Sub topic\n\"projects/default-project/topics/example-bucket\".\n\nIn order to enable notifications, your project's `Cloud Storage service agent\n`_ must have\nthe IAM permission \"pubsub.topics.publish\". This command checks to see if the\ndestination Cloud Pub/Sub topic grants the service agent this permission. If\nnot, the create command attempts to grant it.\n\nA bucket can have up to 100 total notification configurations and up to 10\nnotification configurations set to trigger for a specific event.",
"EXAMPLES": "Begin sending notifications of all changes to the bucket example-bucket\nto the Cloud Pub/Sub topic projects/default-project/topics/example-bucket:\n\n gsutil notification create -f json gs://example-bucket\n\nThe same as above, but specifies the destination topic ID 'files-to-process'\nin the default project:\n\n gsutil notification create -f json \\\n -t files-to-process gs://example-bucket\n\nThe same as above, but specifies a Cloud Pub/Sub topic belonging to the\nspecific cloud project 'example-project':\n\n gsutil notification create -f json \\\n -t projects/example-project/topics/files-to-process gs://example-bucket\n\nCreate a notification config that only sends an event when a new object\nhas been created:\n\n gsutil notification create -f json -e OBJECT_FINALIZE gs://example-bucket\n\nCreate a topic and notification config that only sends an event when\nan object beginning with \"photos/\" is affected:\n\n gsutil notification create -p photos/ gs://example-bucket\n\nList all of the notificationConfigs in bucket example-bucket:\n\n gsutil notification list gs://example-bucket\n\nDelete all notitificationConfigs for bucket example-bucket:\n\n gsutil notification delete gs://example-bucket\n\nDelete one specific notificationConfig for bucket example-bucket:\n\n gsutil notification delete \\\n projects/_/buckets/example-bucket/notificationConfigs/1",
"STEPS": "Once the create command has succeeded, Cloud Storage publishes a message to\nthe specified Cloud Pub/Sub topic when eligible changes occur. In order to\nreceive these messages, you must create a Pub/Sub subscription for your\nPub/Sub topic. To learn more about creating Pub/Sub subscriptions, see `the\nPub/Sub Subscriber Overview `_.\n\nYou can create a simple Pub/Sub subscription using the ``gcloud`` command-line\ntool. For example, to create a new subscription on the topic \"myNewTopic\" and\nattempt to pull messages from it, you could run:\n\n gcloud beta pubsub subscriptions create --topic myNewTopic testSubscription\n gcloud beta pubsub subscriptions pull --auto-ack testSubscription"
}
},
"delete": {
"capsule": "Configure object change notification",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"notification",
"delete"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The delete sub-command deletes notification configs from a bucket. If a\nnotification config name is passed as a parameter, that notification config\nalone is deleted. If a bucket name is passed, all notification configs\nassociated with that bucket are deleted.\n\nCloud Pub/Sub topics associated with this notification config are not\ndeleted by this command. Those must be deleted separately, for example with\nthe gcloud command `gcloud beta pubsub topics delete`.\n\nObject Change Notification subscriptions cannot be deleted with this command.\nFor that, see the command `gsutil notification stopchannel`.",
"EXAMPLES": "Delete a single notification config (with ID 3) in the bucket example-bucket:\n\n gsutil notification delete projects/_/buckets/example-bucket/notificationConfigs/3\n\nDelete all notification configs in the bucket example-bucket:\n\n gsutil notification delete gs://example-bucket"
}
},
"list": {
"capsule": "Configure object change notification",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"notification",
"list"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The list sub-command provides a list of notification configs belonging to a\ngiven bucket. The listed name of each notification config can be used with\nthe delete sub-command to delete that specific notification config.\n\nFor listing Object Change Notifications instead of Cloud Pub/Sub notification\nsubscription configs, add a -o flag.",
"EXAMPLES": "Fetch the list of notification configs for the bucket example-bucket:\n\n gsutil notification list gs://example-bucket\n\nThe same as above, but for Object Change Notifications instead of Cloud\nPub/Sub notification subscription configs:\n\n gsutil notification list -o gs://example-bucket\n\nFetch the notification configs in all buckets matching a wildcard:\n\n gsutil notification list gs://example-*\n\nFetch all of the notification configs for buckets in the default project:\n\n gsutil notification list gs://*"
}
}
},
"flags": {
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Specify an event type filter for this notification config. Cloud\nage only sends notifications of this type. You may specify this\nmeter multiple times to allow multiple event types. If not\nified, Cloud Storage sends notifications for all event types.\nvalid types are:\nJECT_FINALIZE - An object has been created.\nJECT_METADATA_UPDATE - The metadata of an object has changed.\nJECT_DELETE - An object has been permanently deleted.\nJECT_ARCHIVE - A live version of an object has become a\nnoncurrent version.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies the payload format of notification messages. Must be\ner \"json\" for a payload matches the object metadata for the\n API, or \"none\" to specify no payload at all. In either case,\nfication details are available in the message attributes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies a key:value attribute that is appended to the set\nttributes sent to Cloud Pub/Sub for all events associated with\n notification config. You may specify this parameter multiple\ns to set multiple attributes.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies a prefix path filter for this notification config. Cloud\nage only sends notifications for objects in this bucket whose\ns begin with the specified prefix.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": "Skips creation and permission assignment of the Cloud Pub/Sub topic.\n is useful if the caller does not have permission to access\ntopic in question, or if the topic already exists and has the\nopriate publish permission assigned.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
},
"-t": {
"attr": {},
"category": "",
"default": "",
"description": "The Cloud Pub/Sub topic to which notifications should be sent. If\nspecified, this command chooses a topic whose project is your\nult project and whose ID is the same as the Cloud Storage bucket\n.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-t",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"notification"
],
"positionals": [],
"release": "GA",
"sections": {
"AND PARALLEL COMPOSITE UPLOADS": "gsutil supports `parallel composite uploads\n`_.\nIf enabled, an upload can result in multiple temporary component objects\nbeing uploaded before the actual intended object is created. Any subscriber\nto notifications for this bucket then sees a notification for each of these\ncomponents being created and deleted. If this is a concern for you, note\nthat parallel composite uploads can be disabled by setting\n\"parallel_composite_upload_threshold = 0\" in your .boto config file.\nAlternately, your subscriber code can filter out gsutil's parallel\ncomposite uploads by ignoring any notification about objects whose names\ncontain (but do not start with) the following string:\n \"/gsutil/tmp/parallel_composite_uploads/for_details_see/gsutil_help_cp/\".",
"CHANGE NOTIFICATIONS": "Object change notification is a separate, older feature within Cloud Storage\nfor generating notifications. This feature sends HTTPS messages to a client\napplication that you've set up separately. This feature is generally not\nrecommended, because Pub/Sub notifications are cheaper, easier to use, and\nmore flexible. For more information, see\n`Object change notification\n`_.\n\nThe \"watchbucket\" and \"stopchannel\" sub-commands enable and disable Object\nchange notifications.",
"CREATE": "The create sub-command creates a notification config on a bucket, establishing\na flow of event notifications from Cloud Storage to a Cloud Pub/Sub topic. As\npart of creating this flow, the create command also verifies that the\ndestination Cloud Pub/Sub topic exists, creating it if necessary, and verifies\nthat the Cloud Storage bucket has permission to publish events to that topic,\ngranting the permission if necessary.\n\nIf a destination Cloud Pub/Sub topic is not specified with the -t flag, Cloud\nStorage chooses a topic name in the default project whose ID is the same as\nthe bucket name. For example, if the default project ID specified is\n'default-project' and the bucket being configured is gs://example-bucket, the\ncreate command uses the Cloud Pub/Sub topic\n\"projects/default-project/topics/example-bucket\".\n\nIn order to enable notifications, your project's `Cloud Storage service agent\n`_ must have\nthe IAM permission \"pubsub.topics.publish\". This command checks to see if the\ndestination Cloud Pub/Sub topic grants the service agent this permission. If\nnot, the create command attempts to grant it.\n\nA bucket can have up to 100 total notification configurations and up to 10\nnotification configurations set to trigger for a specific event.",
"DELETE": "The delete sub-command deletes notification configs from a bucket. If a\nnotification config name is passed as a parameter, that notification config\nalone is deleted. If a bucket name is passed, all notification configs\nassociated with that bucket are deleted.\n\nCloud Pub/Sub topics associated with this notification config are not\ndeleted by this command. Those must be deleted separately, for example with\nthe gcloud command `gcloud beta pubsub topics delete`.\n\nObject Change Notification subscriptions cannot be deleted with this command.\nFor that, see the command `gsutil notification stopchannel`.",
"DESCRIPTION": "You can use the ``notification`` command to configure\n`Pub/Sub notifications for Cloud Storage\n`_\nand `Object change notification\n`_ channels.",
"EXAMPLES": "Stop the notification event channel with channel identifier channel1 and\nresource identifier SoGqan08XDIFWr1Fv_nGpRJBHh8:\n\n gsutil notification stopchannel channel1 SoGqan08XDIFWr1Fv_nGpRJBHh8",
"LIST": "The list sub-command provides a list of notification configs belonging to a\ngiven bucket. The listed name of each notification config can be used with\nthe delete sub-command to delete that specific notification config.\n\nFor listing Object Change Notifications instead of Cloud Pub/Sub notification\nsubscription configs, add a -o flag.",
"PUB/SUB": "The \"create\", \"list\", and \"delete\" sub-commands deal with configuring Cloud\nStorage integration with Google Cloud Pub/Sub.",
"STEPS": "Once the create command has succeeded, Cloud Storage publishes a message to\nthe specified Cloud Pub/Sub topic when eligible changes occur. In order to\nreceive these messages, you must create a Pub/Sub subscription for your\nPub/Sub topic. To learn more about creating Pub/Sub subscriptions, see `the\nPub/Sub Subscriber Overview `_.\n\nYou can create a simple Pub/Sub subscription using the ``gcloud`` command-line\ntool. For example, to create a new subscription on the topic \"myNewTopic\" and\nattempt to pull messages from it, you could run:\n\n gcloud beta pubsub subscriptions create --topic myNewTopic testSubscription\n gcloud beta pubsub subscriptions pull --auto-ack testSubscription",
"STOPCHANNEL": "The stopchannel sub-command can be used to stop sending change events to a\nnotification channel.\n\nThe channel_id and resource_id parameters should match the values from the\nresponse of a bucket watch request.",
"WATCHBUCKET": "The watchbucket sub-command can be used to watch a bucket for object changes.\nA service account must be used when running this command.\n\nThe app_url parameter must be an HTTPS URL to an application that will be\nnotified of changes to any object in the bucket.\n\nThe optional id parameter can be used to assign a unique identifier to the\ncreated notification channel. If not provided, a random UUID string is\ngenerated.\n\nThe optional token parameter can be used to validate notifications events.\nTo do this, set this custom token and store it to later verify that\nnotification events contain the client token you expect."
}
},
"pap": {
"capsule": "Configure public access prevention",
"commands": {
"get": {
"capsule": "Configure public access prevention",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"pap",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``pap get`` command returns public access prevention\nvalues for the specified Cloud Storage buckets.",
"EXAMPLES": "Check if ``redbucket`` and ``bluebucket`` are using public\naccess prevention:\n\n gsutil pap get gs://redbucket gs://bluebucket"
}
},
"set": {
"capsule": "Configure public access prevention",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"pap",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``pap set`` command configures public access prevention\nfor Cloud Storage buckets. If you set a bucket to be\n``inherited``, it uses public access prevention only if\nthe bucket is subject to the `public access prevention\n`_\norganization policy constraint.",
"EXAMPLES": "Configure ``redbucket`` and ``bluebucket`` to use public\naccess prevention:\n\n gsutil pap set enforced gs://redbucket gs://bluebucket"
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"pap"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``pap`` command is used to retrieve or configure the\n`public access prevention\n`_ setting of\nCloud Storage buckets. This command has two sub-commands: ``get`` and ``set``.",
"EXAMPLES": "Configure ``redbucket`` and ``bluebucket`` to use public\naccess prevention:\n\n gsutil pap set enforced gs://redbucket gs://bluebucket",
"GET": "The ``pap get`` command returns public access prevention\nvalues for the specified Cloud Storage buckets.",
"SET": "The ``pap set`` command configures public access prevention\nfor Cloud Storage buckets. If you set a bucket to be\n``inherited``, it uses public access prevention only if\nthe bucket is subject to the `public access prevention\n`_\norganization policy constraint."
}
},
"perfdiag": {
"capsule": "Run performance diagnostic",
"commands": {},
"flags": {
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the number of `processes\nttps://en.wikipedia.org/wiki/Process_(computing)>`_ to use\nile running throughput experiments. The default value is 1.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the directory to store temporary local files in. If not\necified, a default temporary directory will be used.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-i": {
"attr": {},
"category": "",
"default": "",
"description": "Reads the JSON output file created using the ``-o`` command and prints\nformatted description of the results.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-i",
"nargs": "0",
"type": "bool",
"value": ""
},
"-j": {
"attr": {},
"category": "",
"default": "",
"description": "Applies gzip transport encoding and sets the target compression\ntio for the generated test files. This ratio can be an integer\ntween 0 and 100 (inclusive), with 0 generating a file with\niform data, and 100 generating random data. When you specify\ne ``-j`` option, files being uploaded are compressed in-memory and\n-the-wire only. See `cp -j\nttps://cloud.google.com/storage/docs/gsutil/commands/cp#options>`_\nr specific semantics.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-j",
"nargs": "0",
"type": "bool",
"value": ""
},
"-k": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the number of `threads\nttps://en.wikipedia.org/wiki/Thread_(computing)>`_ per process\n use while running throughput experiments. Each process will\nceive an equal number of threads. The default value is 1.\nTE: All specified threads and processes will be created, but may\nt by saturated with work if too few objects (specified with ``-n``)\nd too few components (specified with ``-y``) are specified.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Adds metadata to the result JSON file. Multiple ``-m`` values can be\necified. Example:\n gsutil perfdiag -m \"key1:val1\" -m \"key2:val2\" gs://bucketname\nch metadata key will be added to the top-level \"metadata\"\nctionary in the output JSON file.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
},
"-n": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the number of objects to use when downloading and uploading\nles during tests. Defaults to 5.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-n",
"nargs": "0",
"type": "bool",
"value": ""
},
"-o": {
"attr": {},
"category": "",
"default": "",
"description": "Writes the results of the diagnostic to an output file. The output\n a JSON file containing system information and performance\nagnostic results. The file can be read and reported later using\ne ``-i`` option.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-o",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the type of parallelism to be used (only applicable when\nreads or processes are specified and threads * processes > 1). The\nfault is to use ``fan``. Must be one of the following:\nn\n Use one thread per object. This is akin to using gsutil ``-m cp``,\n with sliced object download / parallel composite upload\n disabled.\nice\n Use Y (specified with ``-y``) threads for each object, transferring\n one object at a time. This is akin to using parallel object\n download / parallel composite upload, without ``-m``. Sliced\n uploads not supported for s3.\nth\n Use Y (specified with ``-y``) threads for each object, transferring\n multiple objects at a time. This is akin to simultaneously\n using sliced object download / parallel composite upload and\n ``gsutil -m cp``. Parallel composite uploads not supported for s3.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the size (in bytes) for each of the N (set with ``-n``) objects\ned in the read and write throughput tests. The default is 1 MiB.\nis can also be specified using byte suffixes such as 500K or 1M.\nTE: these values are interpreted as multiples of 1024 (K=1024,\n1024*1024, etc.)\nTE: If ``rthru_file`` or ``wthru_file`` are performed, N (set with\n-n``) times as much disk space as specified will be required for\ne operation.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
},
"-t": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the list of diagnostic tests to perform. The default is to\nn the ``lat``, ``rthru``, and ``wthru`` diagnostic tests. Must be a\nmma-separated list containing one or more of the following:\nt\n For N (set with ``-n``) objects, write the object, retrieve its\n metadata, read the object, and finally delete the object.\n Record the latency of each operation.\nst\n Write N (set with ``-n``) objects to the bucket, record how long\n it takes for the eventually consistent listing call to return\n the N objects in its result, delete the N objects, then record\n how long it takes listing to stop returning the N objects.\nhru\n Runs N (set with ``-n``) read operations, with at most C\n (set with -c) reads outstanding at any given time.\nhru_file\n The same as ``rthru``, but simultaneously writes data to the disk,\n to gauge the performance impact of the local disk on downloads.\nhru\n Runs N (set with ``-n``) write operations, with at most C\n (set with ``-c``) writes outstanding at any given time.\nhru_file\n The same as wthru, but simultaneously reads data from the disk,\n to gauge the performance impact of the local disk on uploads.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-t",
"nargs": "0",
"type": "bool",
"value": ""
},
"-y": {
"attr": {},
"category": "",
"default": "",
"description": "Sets the number of slices to divide each file/object into while\nansferring data. Only applicable with the slice (or both)\nrallelism type. The default is 4 slices.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-y",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"perfdiag"
],
"positionals": [],
"release": "GA",
"sections": {
"AVAILABILITY": "The ``perfdiag`` command ignores the boto num_retries configuration parameter.\nInstead, it always retries on HTTP errors in the 500 range and keeps track of\nhow many 500 errors were encountered during the test. The availability\nmeasurement is reported at the end of the test.\n\nNote that HTTP responses are only recorded when the request was made in a\nsingle process. When using multiple processes or threads, read and write\nthroughput measurements are performed in an external process, so the\navailability numbers reported won't include the throughput measurements.",
"DESCRIPTION": "The ``perfdiag`` command runs a suite of diagnostic tests for a given Cloud\nStorage bucket.\n\nThe ``bucket_name`` parameter must name an existing bucket to which the user\nhas write permission. Several test files will be uploaded to and downloaded\nfrom this bucket. All test files will be deleted at the completion of the\ndiagnostic if it finishes successfully. For a list of relevant permissions,\nsee `Cloud IAM permissions for gsutil commands\n`_.\n\ngsutil performance can be influenced by a number of factors originating\nat the client, server, or network level. Some examples include the\nfollowing:\n\n + CPU speed\n + Available memory\n + The access path to the local disk\n + Network bandwidth\n + Contention and error rates along the path between gsutil and Google servers\n + Operating system buffering configuration\n + Firewalls and other network elements\n\nThe `perfdiag` command is provided so that customers can run a known\nmeasurement suite when troubleshooting performance problems.",
"DIAGNOSTIC OUTPUT TO THE CLOUD STORAGE TEAM": "If the Cloud Storage team asks you to run a performance diagnostic\nplease use the following command, and email the output file (output.json)\nto the @google.com address provided by the Cloud Storage team:\n\n gsutil perfdiag -o output.json gs://your-bucket\n\nAdditional resources for discussing ``perfdiag`` results include the\n`Stack Overflow tag for Cloud Storage\n`_ and\nthe `gsutil GitHub repository\n`_.",
"NOTE": "The ``perfdiag`` command runs a series of tests that collects system information,\nsuch as the following:\n\n+ Retrieves requester's IP address.\n+ Executes DNS queries to Google servers and collects the results.\n+ Collects network statistics information from the output of ``netstat -s`` and\n evaluates the BIOS product name string.\n+ If a proxy server is configured, attempts to connect to it to retrieve\n the location and storage class of the bucket being used for performance\n testing.\n\nNone of this information will be sent to Google unless you proactively choose to\nsend it."
}
},
"rb": {
"capsule": "Remove buckets",
"commands": {},
"flags": {
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Continues silently (without printing error messages) despite\nrors when removing buckets. If some buckets couldn't be removed,\nutil's exit status will be non-zero even if this flag is set.\n no buckets could be removed, the command raises a\no matches\" error.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rb"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "Delete one or more buckets. Buckets must be empty before you can delete them.\n\nBe certain you want to delete a bucket before you do so, as once it is\ndeleted the name becomes available and another user may create a bucket with\nthat name. (But see also \"DOMAIN NAMED BUCKETS\" under \"gsutil help naming\"\nfor help carving out parts of the bucket name space.)"
}
},
"requesterpays": {
"capsule": "Enable or disable requester pays for one or more buckets",
"commands": {
"get": {
"capsule": "Enable or disable requester pays for one or more buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"requesterpays",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"get\" sub-command gets the Requester Pays configuration for a\nbucket and displays whether or not it is enabled."
}
},
"set": {
"capsule": "Enable or disable requester pays for one or more buckets",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"requesterpays",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The \"set\" sub-command requires an additional sub-command, either \"on\" or\n\"off\", which, respectively, will enable or disable Requester Pays for the\nspecified bucket."
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"requesterpays"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The `Requester Pays\n`_ feature enables you\nto configure a Google Cloud Storage bucket so that the requester\npays all costs related to accessing the bucket and its objects.\n\nThe gsutil requesterpays command has two sub-commands:",
"GET": "The \"get\" sub-command gets the Requester Pays configuration for a\nbucket and displays whether or not it is enabled.",
"SET": "The \"set\" sub-command requires an additional sub-command, either \"on\" or\n\"off\", which, respectively, will enable or disable Requester Pays for the\nspecified bucket."
}
},
"retention": {
"capsule": "Provides utilities to interact with Retention Policy feature.",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"retention"
],
"positionals": [],
"release": "GA",
"sections": {
"CLEAR": "The ``gsutil retention clear`` command removes an unlocked retention policy\nfrom one or more buckets. You cannot remove or reduce the duration of a locked\nretention policy.",
"DESCRIPTION": "",
"EVENT": "The ``gsutil retention event`` command enables or disables an event-based\nhold on an object.",
"EVENT-DEFAULT": "The ``gsutil retention event-default`` command sets the default value for an\nevent-based hold on one or more buckets.\n\nBy setting the default event-based hold on a bucket, newly-created objects\ninherit that value as their event-based hold (it is not applied\nretroactively).",
"EXAMPLES": "Setting the temporary hold on an object:\n\n gsutil retention temp set gs://my-bucket/my-object\n\nReleasing the temporary hold on an object:\n\n gsutil retention temp release gs://my-bucket/my-object\n\nYou can also provide a precondition on an object's metageneration in order to\navoid potential race conditions. You can use gsutil's '-h' option to specify\npreconditions. For example, the following specifies a precondition that checks\nan object's metageneration before setting the temporary hold on the object:\n\n gsutil -h \"x-goog-if-metageneration-match: 1\" \\\n retention temp set gs://my-bucket/my-object\n\nIf you want to set or release a temporary hold on a large number of objects, then\nyou might want to use the top-level '-m' option to perform a parallel update.\nFor example, the following command sets a temporary hold on objects ending\nwith .jpg in parallel, in the root folder:\n\n gsutil -m retention temp set gs://bucket/*.jpg",
"FORMATS": "Formats for the ``set`` subcommand include:\n\ns\n Specifies retention period of seconds for objects in this bucket.\n\nd\n Specifies retention period of days for objects in this bucket.\n\nm\n Specifies retention period of months for objects in this bucket.\n\ny\n Specifies retention period of years for objects in this bucket.\n\nGCS JSON API accepts retention periods as number of seconds. Durations provided\nin terms of days, months or years are converted to their rough equivalent\nvalues in seconds, using the following conversions:\n\n- A month is considered to be 31 days or 2,678,400 seconds.\n- A year is considered to be 365.25 days or 31,557,600 seconds.\n\nRetention periods must be greater than 0 and less than 100 years.\nRetention durations must be in only one form (seconds, days, months,\nor years), and not a combination of them.\n\nNote that while it is possible to specify retention durations\nshorter than a day (using seconds), enforcement of such retention periods is not\nguaranteed. Such durations may only be used for testing purposes.",
"GET": "The ``gsutil retention get`` command retrieves the retention policy for a given\nbucket and displays a human-readable representation of the configuration.",
"LOCK": "The ``gsutil retention lock`` command PERMANENTLY locks an unlocked\nretention policy on one or more buckets.\n\nCAUTION: A locked retention policy cannot be removed from a bucket or reduced\nin duration. Once locked, deleting the bucket is the only way to \"remove\" a\nretention policy.",
"SET": "You can configure a data retention policy for a Cloud Storage bucket that\ngoverns how long objects in the bucket must be retained. You can also lock the\ndata retention policy, permanently preventing the policy from being reduced or\nremoved. For more information, see `Retention policies and Bucket Lock\n`_.\n\nThe ``gsutil retention set`` command allows you to set or update the\nretention policy on one or more buckets.\n\nTo remove an unlocked retention policy from one or more\nbuckets, use the ``gsutil retention clear`` command.\n\nThe ``set`` sub-command can set a retention policy with the following formats:",
"TEMP": "The ``gsutil retention temp`` command enables or disables a temporary hold\non an object."
}
},
"rewrite": {
"capsule": "Rewrite objects",
"commands": {},
"flags": {
"-I": {
"attr": {},
"category": "",
"default": "",
"description": "Causes gsutil to read the list of objects to rewrite from stdin.\nThis allows you to run a program that generates the list of\nobjects to rewrite.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-I",
"nargs": "0",
"type": "bool",
"value": ""
},
"-O": {
"attr": {},
"category": "",
"default": "",
"description": "When a bucket has uniform bucket-level access (UBLA) enabled,\nthe -O flag is required and skips all ACL checks. When a\nbucket has UBLA disabled, the -O flag rewrites objects with the\nbucket's default object ACL instead of the existing object ACL.\nThis is needed if you do not have OWNER permission on the\nobject.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-O",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Continues silently (without printing error messages) despite\nerrors when rewriting multiple objects. If some of the objects\ncould not be rewritten, gsutil's exit status is non-zero even\nif this flag is set. This option is implicitly set when running\n\"gsutil -m rewrite ...\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-k": {
"attr": {},
"category": "",
"default": "",
"description": "Rewrite objects with the current encryption key specified in\nyour boto configuration file. The value for encryption_key may\nbe either a base64-encoded CSEK or a fully-qualified KMS key\nname. If no value is specified for encryption_key, gsutil\nignores this flag. Instead, rewritten objects are encrypted with\nthe bucket's default KMS key, if one is set, or Google-managed\nencryption, if no default KMS key is set.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-k",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "The -R and -r options are synonymous. Causes bucket or bucket\nsubdirectory contents to be rewritten recursively.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-s": {
"attr": {},
"category": "",
"default": "",
"description": " Rewrite objects using the specified storage class.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-s",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rewrite"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The gsutil rewrite command rewrites cloud objects, applying the specified\ntransformations to them. The transformation(s) are atomic for each affected\nobject and applied based on the input transformation flags. Object metadata\nvalues are preserved unless altered by a transformation. At least one\ntransformation flag, -k or -s, must be included in the command.\n\nThe -k flag is supported to add, rotate, or remove encryption keys on\nobjects. For example, the command:\n\n gsutil rewrite -k -r gs://bucket\n\nupdates all objects in gs://bucket with the current encryption key\nfrom your boto config file, which may either be a base64-encoded CSEK or the\nfully-qualified name of a Cloud KMS key.\n\nThe rewrite command acts only on live object versions, so specifying a\nURL with a generation number fails. If you want to rewrite a noncurrent\nversion, first copy it to the live version, then rewrite it, for example:\n\n gsutil cp gs://bucket/object#123 gs://bucket/object\n gsutil rewrite -k gs://bucket/object\n\nYou can use the -s option to specify a new storage class for objects. For\nexample, the command:\n\n gsutil rewrite -s nearline gs://bucket/foo\n\nrewrites the object, changing its storage class to nearline.\n\nIf you specify the -k option and you have an encryption key set in your boto\nconfiguration file, the rewrite command skips objects that are already\nencrypted with the specified key. For example, if you run:\n\n gsutil rewrite -k -r gs://bucket\n\nand gs://bucket contains objects encrypted with the key specified in your boto\nconfiguration file, gsutil skips rewriting those objects and only rewrites\nobjects that are not encrypted with the specified key. This avoids the cost of\nperforming redundant rewrite operations.\n\nIf you specify the -k option and you do not have an encryption key set in your\nboto configuration file, gsutil always rewrites each object, without\nexplicitly specifying an encryption key. This results in rewritten objects\nbeing encrypted with either the bucket's default KMS key (if one is set) or\nGoogle-managed encryption (no CSEK or CMEK). Gsutil does not attempt to\ndetermine whether the operation is redundant (and thus skippable) because\ngsutil cannot be sure how the object is encrypted after the rewrite. Note that\nif your goal is to encrypt objects with a bucket's default KMS key, you can\navoid redundant rewrite costs by specifying the bucket's default KMS key in\nyour boto configuration file; this allows gsutil to perform an accurate\ncomparison of the objects' current and desired encryption configurations and\nskip rewrites for objects already encrypted with that key.\n\nIf have an encryption key set in your boto configuration file and specify\nmultiple transformations, gsutil only skips those that would not change\nthe object's state. For example, if you run:\n\n gsutil rewrite -s nearline -k -r gs://bucket\n\nand gs://bucket contains objects that already match the encryption\nconfiguration but have a storage class of standard, the only transformation\napplied to those objects would be the change in storage class.\n\nYou can pass a list of URLs (one per line) to rewrite on stdin instead of as\ncommand line arguments by using the -I option. 
This allows you to use gsutil\nin a pipeline to rewrite objects identified by a program, such as:\n\n some_program | gsutil -m rewrite -k -I\n\nThe contents of stdin can name cloud URLs and wildcards of cloud URLs.\n\nThe rewrite command requires OWNER permissions on each object to preserve\nobject ACLs. You can bypass this by using the -O flag, which causes\ngsutil not to read the object's ACL and instead apply the default object ACL\nto the rewritten object:\n\n gsutil rewrite -k -O -r gs://bucket"
}
},
"rm": {
"capsule": "Remove objects",
"commands": {},
"flags": {
"-I": {
"attr": {},
"category": "",
"default": "",
"description": "Causes gsutil to read the list of objects to remove from stdin.\nis allows you to run a program that generates the list of\njects to remove.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-I",
"nargs": "0",
"type": "bool",
"value": ""
},
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "Delete all versions of an object.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-f": {
"attr": {},
"category": "",
"default": "",
"description": "Continues silently (without printing error messages) despite\nrors when removing multiple objects. If some of the objects\nuld not be removed, gsutil's exit status will be non-zero even\n this flag is set. Execution will still halt if an inaccessible\ncket is encountered. This option is implicitly set when running\nsutil -m rm ...\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-f",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "The -R and -r options are synonymous. Causes bucket or bucket\nbdirectory contents (all objects and subdirectories that it\nntains) to be removed recursively. If used with a bucket-only\nL (like gs://bucket), after deleting objects and subdirectories\nutil deletes the bucket. This option implies the -a option and\nletes all object versions. If you only want to delete live\nject versions, use the `** wildcard\nttps://cloud.google.com/storage/docs/wildcards>`_\nstead of -r.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rm"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "NOTE: As part of verifying the existence of objects prior to deletion,\n``gsutil rm`` makes ``GET`` requests to Cloud Storage for object metadata.\nThese requests incur `network and operations charges\n`_.\n\nThe gsutil rm command removes objects and/or buckets.\nFor example, the following command removes the object ``kitten.png``:\n\n gsutil rm gs://bucket/kitten.png\n\nUse the -r option to specify recursive object deletion. For example, the\nfollowing command removes gs://bucket/subdir and all objects and\nsubdirectories under it:\n\n gsutil rm -r gs://bucket/subdir\n\nWhen working with versioning-enabled buckets, note that the -r option removes\nall object versions in the subdirectory. To remove only the live version of\neach object in the subdirectory, use the `** wildcard\n`_.\n\nThe following command removes all versions of all objects in a bucket, and\nthen deletes the bucket:\n\n gsutil rm -r gs://bucket\n\nTo remove all objects and their versions from a bucket without deleting the\nbucket, use the ``-a`` option:\n\n gsutil rm -a gs://bucket/**\n\nIf you have a large number of objects to remove, use the ``gsutil -m`` option,\nwhich enables multi-threading/multi-processing:\n\n gsutil -m rm -r gs://my_bucket/subdir\n\nYou can pass a list of URLs (one per line) to remove on stdin instead of as\ncommand line arguments by using the -I option. This allows you to use gsutil\nin a pipeline to remove objects identified by a program, such as:\n\n some_program | gsutil -m rm -I\n\nThe contents of stdin can name cloud URLs and wildcards of cloud URLs.\n\nNote that ``gsutil rm`` refuses to remove files from the local file system.\nFor example, this fails:\n\n gsutil rm *.txt\n\nWARNING: Object removal cannot be undone. Cloud Storage is designed to give\ndevelopers a high amount of flexibility and control over their data, and\nGoogle maintains strict controls over the processing and purging of deleted\ndata. If you have concerns that your application software or your users may\nat some point erroneously delete or replace data, see\n`Options for controlling data lifecycles\n`_ for ways to\nprotect your data from accidental data deletion."
}
},
"rpo": {
"capsule": "Configure replication",
"commands": {
"get": {
"capsule": "Configure replication",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rpo",
"get"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``rpo get`` command returns the replication setting\nfor the specified Cloud Storage buckets.",
"EXAMPLES": "Check if your buckets are using turbo replication:\n\n gsutil rpo get gs://redbucket gs://bluebucket"
}
},
"set": {
"capsule": "Configure replication",
"commands": {},
"flags": {},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rpo",
"set"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``rpo set`` command configures turbo replication\nfor dual-region Google Cloud Storage buckets.",
"EXAMPLES": "Configure your buckets to use turbo replication:\n\n gsutil rpo set ASYNC_TURBO gs://redbucket gs://bluebucket\n\nConfigure your buckets to NOT use turbo replication:\n\n gsutil rpo set DEFAULT gs://redbucket gs://bluebucket"
}
}
},
"flags": {},
"groups": {},
"is_group": true,
"is_hidden": false,
"path": [
"gsutil",
"rpo"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The ``rpo`` command is used to retrieve or configure the\n`replication setting\n`_ of\ndual-region Cloud Storage buckets.\nThis command has two sub-commands: ``get`` and ``set``.",
"EXAMPLES": "Configure your buckets to use turbo replication:\n\n gsutil rpo set ASYNC_TURBO gs://redbucket gs://bluebucket\n\nConfigure your buckets to NOT use turbo replication:\n\n gsutil rpo set DEFAULT gs://redbucket gs://bluebucket",
"GET": "The ``rpo get`` command returns the replication setting\nfor the specified Cloud Storage buckets.",
"SET": "The ``rpo set`` command configures turbo replication\nfor dual-region Google Cloud Storage buckets."
}
},
"rsync": {
"capsule": "Synchronize content of two buckets/directories",
"commands": {},
"flags": {
"-C": {
"attr": {},
"category": "",
"default": "",
"description": "If an error occurs, continue to attempt to copy the remaining\n files. If errors occurred, gsutil's exit status will be\n non-zero even if this flag is set. This option is implicitly\n set when running \"gsutil -m rsync...\".\n NOTE: -C only applies to the actual copying operation. If an\n error occurs while iterating over the files in the local\n directory (e.g., invalid Unicode file name) gsutil will print\n an error message and abort.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-C",
"nargs": "0",
"type": "bool",
"value": ""
},
"-J": {
"attr": {},
"category": "",
"default": "",
"description": "Applies gzip transport encoding to file uploads. This option\n works like the -j option described above, but it applies to\n all uploaded files, regardless of extension.\n CAUTION: If you use this option and some of the source files\n don't compress well (e.g., that's often true of binary data),\n this option may result in longer uploads.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-J",
"nargs": "0",
"type": "bool",
"value": ""
},
"-P": {
"attr": {},
"category": "",
"default": "",
"description": "Causes POSIX attributes to be preserved when objects are\n copied. With this feature enabled, gsutil rsync will copy\n fields provided by stat. These are the user ID of the owner,\n the group ID of the owning group, the mode (permissions) of the\n file, and the access/modification timestamps of the file. For\n downloads, these attributes will only be set if the source\n objects were uploaded with this flag enabled.\n On Windows, this flag will only set and restore access time and\n modification time. This is because Windows doesn't have a\n notion of POSIX uid/gid/mode.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-P",
"nargs": "0",
"type": "bool",
"value": ""
},
"-U": {
"attr": {},
"category": "",
"default": "",
"description": "Skip objects with unsupported object types instead of failing.\n Unsupported object types are Amazon S3 Objects in the GLACIER\n storage class.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-U",
"nargs": "0",
"type": "bool",
"value": ""
},
"-a": {
"attr": {},
"category": "",
"default": "",
"description": "canned_acl Sets named canned_acl when uploaded objects created. See\n \"gsutil help acls\" for further details. Note that rsync will\n decide whether or not to perform a copy based only on object\n size and modification time, not current ACL state. Also see the\n -p option below.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-a",
"nargs": "0",
"type": "bool",
"value": ""
},
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "Causes the rsync command to compute and compare checksums\n (instead of comparing mtime) for files if the size of source\n and destination match. This option increases local disk I/O and\n run time if either src_url or dst_url are on the local file\n system.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Delete extra files under dst_url not found under src_url. By\n default extra files are not deleted.\n NOTE: this option can delete data quickly if you specify the\n wrong source/destination combination. See the help section\n above, \"BE CAREFUL WHEN USING -d OPTION!\".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-e": {
"attr": {},
"category": "",
"default": "",
"description": "Exclude symlinks. When specified, symbolic links will be\n ignored. Note that gsutil does not follow directory symlinks,\n regardless of whether -e is specified.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-e",
"nargs": "0",
"type": "bool",
"value": ""
},
"-i": {
"attr": {},
"category": "",
"default": "",
"description": "Skip copying any files that already exist at the destination,\n regardless of their modification time.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-i",
"nargs": "0",
"type": "bool",
"value": ""
},
"-j": {
"attr": {},
"category": "",
"default": "",
"description": " Applies gzip transport encoding to any file upload whose\n extension matches the -j extension list. This is useful when\n uploading files with compressible content (such as .js, .css,\n or .html files) because it saves network bandwidth while\n also leaving the data uncompressed in Google Cloud Storage.\n When you specify the -j option, files being uploaded are\n compressed in-memory and on-the-wire only. Both the local\n files and Cloud Storage objects remain uncompressed. The\n uploaded objects retain the Content-Type and name of the\n original files.\n Note that if you want to use the top-level -m option to\n parallelize copies along with the -j/-J options, your\n performance may be bottlenecked by the\n \"max_upload_compression_buffer_size\" boto config option,\n which is set to 2 GiB by default. This compression buffer\n size can be changed to a higher limit, e.g.:\n gsutil -o \"GSUtil:max_upload_compression_buffer_size=8G\" \\\n -m rsync -j html,txt /local/source/dir gs://bucket/path",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-j",
"nargs": "0",
"type": "bool",
"value": ""
},
"-n": {
"attr": {},
"category": "",
"default": "",
"description": "Causes rsync to run in \"dry run\" mode, i.e., just outputting\n what would be copied or deleted without actually doing any\n copying/deleting.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-n",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Causes ACLs to be preserved when objects are copied. Note that\n rsync will decide whether or not to perform a copy based only\n on object size and modification time, not current ACL state.\n Thus, if the source and destination differ in size or\n modification time and you run gsutil rsync -p, the file will be\n copied and ACL preserved. However, if the source and\n destination don't differ in size or checksum but have different\n ACLs, running gsutil rsync -p will have no effect.\n Note that this option has performance and cost implications\n when using the XML API, as it requires separate HTTP calls for\n interacting with ACLs. The performance issue can be mitigated\n to some degree by using gsutil -m rsync to cause parallel\n synchronization. Also, this option only works if you have OWNER\n access to all of the objects that are copied.\n You can avoid the additional performance and cost of using\n rsync -p if you want all objects in the destination bucket to\n end up with the same ACL by setting a default object ACL on\n that bucket instead of using rsync -p. See 'gsutil help\n defacl'.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": "The -R and -r options are synonymous. Causes directories,\n buckets, and bucket subdirectories to be synchronized\n recursively. If you neglect to use this option gsutil will make\n only the top-level directory in the source and destination URLs\n match, skipping any sub-directories.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "When a file/object is present in both the source and\n destination, if mtime is available for both, do not perform\n the copy if the destination mtime is newer.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
},
"-x": {
"attr": {},
"category": "",
"default": "",
"description": "pattern Causes files/objects matching pattern to be excluded, i.e., any\n matching files/objects are not copied or deleted. Note that the\n pattern is a `Python regular expression\n `_, not a wildcard\n (so, matching any string ending in \"abc\" would be specified\n using \".*abc$\" rather than \"*abc\"). Note also that the exclude\n path is always relative (similar to Unix rsync or tar exclude\n options). For example, if you run the command:\n gsutil rsync -x \"data.[/\\\\].*\\.txt$\" dir gs://my-bucket\n it skips the file dir/data1/a.txt.\n You can use regex alternation to specify multiple exclusions,\n for example:\n gsutil rsync -x \".*\\.txt$|.*\\.jpg$\" dir gs://my-bucket\n skips all .txt and .jpg files in dir.\n NOTE: When using the Windows cmd.exe command line interpreter,\n use ^ as an escape character instead of \\ and escape the |\n character. When using Windows PowerShell, use ' instead of \"\n and surround the | character with \".",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-x",
"nargs": "0",
"type": "bool",
"value": ""
},
"-y": {
"attr": {},
"category": "",
"default": "",
"description": "pattern Similar to the -x option, but the command will first skip\n directories/prefixes using the provided pattern and then\n exclude files/objects using the same pattern. This is usually\n much faster, but won't work as intended with negative\n lookahead patterns. For example, if you run the command:\n gsutil rsync -y \"^(?!.*\\.txt$).*\" dir gs://my-bucket\n This would first exclude all subdirectories unless they end in\n .txt before excluding all files except those ending in .txt.\n Running the same command with the -x option would result in all\n .txt files being included, regardless of whether they appear in\n subdirectories that end in .txt.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-y",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"rsync"
],
"positionals": [],
"release": "GA",
"sections": {
"CAREFUL WHEN SYNCHRONIZING OVER OS-SPECIFIC FILE TYPES (SYMLINKS, DEVICES, ETC.)": "Running gsutil rsync over a directory containing operating system-specific\nfile types (symbolic links, device files, sockets, named pipes, etc.) can\ncause various problems. For example, running a command like:\n\n gsutil rsync -r ./dir gs://my-bucket\n\nwill cause gsutil to follow any symbolic links in ./dir, creating objects in\nmy-bucket containing the data from the files to which the symlinks point. This\ncan cause various problems:\n\n* If you use gsutil rsync as a simple way to backup a directory to a bucket,\n restoring from that bucket will result in files where the symlinks used\n to be. At best this is wasteful of space, and at worst it can result in\n outdated data or broken applications -- depending on what is consuming\n the symlinks.\n\n* If you use gsutil rsync over directories containing broken symlinks,\n gsutil rsync will abort (unless you pass the -e option).\n\n* gsutil rsync skips symlinks that point to directories.\n\nSince gsutil rsync is intended to support data operations (like moving a data\nset to the cloud for computational processing) and it needs to be compatible\nboth in the cloud and across common operating systems, there are no plans for\ngsutil rsync to support operating system-specific file types like symlinks.\n\nWe recommend that users do one of the following:\n\n* Don't use gsutil rsync over directories containing symlinks or other OS-\n specific file types.\n* Use the -e option to exclude symlinks or the -x option to exclude\n OS-specific file types by name.\n* Use a tool (such as tar) that preserves symlinks and other OS-specific file\n types, packaging up directories containing such files before uploading to\n the cloud.",
"CONSISTENCY WITH NON-GOOGLE CLOUD PROVIDERS": "While Google Cloud Storage is strongly consistent, some cloud providers\nonly support eventual consistency. You may encounter scenarios where rsync\nsynchronizes using stale listing data when working with these other cloud\nproviders. For example, if you run rsync immediately after uploading an\nobject to an eventually consistent cloud provider, the added object may not\nyet appear in the provider's listing. Consequently, rsync will miss adding\nthe object to the destination. If this happens you can rerun the rsync\noperation again later (after the object listing has \"caught up\").",
"DESCRIPTION": "The gsutil rsync command makes the contents under dst_url the same as the\ncontents under src_url, by copying any missing files/objects (or those whose\ndata has changed), and (if the -d option is specified) deleting any extra\nfiles/objects. src_url must specify a directory, bucket, or bucket\nsubdirectory. For example, to sync the contents of the local directory \"data\"\nto the bucket gs://mybucket/data, you could do:\n\n gsutil rsync data gs://mybucket/data\n\nTo recurse into directories use the -r option:\n\n gsutil rsync -r data gs://mybucket/data\n\nIf you have a large number of objects to synchronize you might want to use the\ngsutil -m option (see \"gsutil help options\"), to perform parallel\n(multi-threaded/multi-processing) synchronization:\n\n gsutil -m rsync -r data gs://mybucket/data\n\nThe -m option typically will provide a large performance boost if either the\nsource or destination (or both) is a cloud URL. If both source and\ndestination are file URLs the -m option will typically thrash the disk and\nslow synchronization down.\n\nNote 1: Shells (like bash, zsh) sometimes attempt to expand wildcards in ways\nthat can be surprising. Also, attempting to copy files whose names contain\nwildcard characters can result in problems. For more details about these\nissues see `Wildcard behavior considerations\n`_.\n\nNote 2: If you are synchronizing a large amount of data between clouds you\nmight consider setting up a\n`Google Compute Engine `_\naccount and running gsutil there. Since cross-provider gsutil data transfers\nflow through the machine where gsutil is running, doing this can make your\ntransfer run significantly faster than running gsutil on your local\nworkstation.\n\nNote 3: rsync does not copy empty directory trees, since Cloud Storage uses a\n`flat namespace `_.",
"DETECTION ALGORITHM": "To determine if a file or object has changed, gsutil rsync first checks\nwhether the file modification time (mtime) of both the source and destination\nis available. If mtime is available at both source and destination, and the\ndestination mtime is different than the source, or if the source and\ndestination file size differ, gsutil rsync will update the destination. If the\nsource is a cloud bucket and the destination is a local file system, and if\nmtime is not available for the source, gsutil rsync will use the time created\nfor the cloud object as a substitute for mtime. Otherwise, if mtime is not\navailable for either the source or the destination, gsutil rsync will fall\nback to using checksums. If the source and destination are both cloud buckets\nwith checksums available, gsutil rsync will use these hashes instead of mtime.\nHowever, gsutil rsync will still update mtime at the destination if it is not\npresent. If the source and destination have matching checksums and only the\nsource has an mtime, gsutil rsync will copy the mtime to the destination. If\nneither mtime nor checksums are available, gsutil rsync will resort to\ncomparing file sizes.\n\nChecksums will not be available when comparing composite Google Cloud Storage\nobjects with objects at a cloud provider that does not support CRC32C (which\nis the only checksum available for composite objects). See 'gsutil help\ncompose' for details about composite objects.",
"HANDLING": "The rsync command retries failures when it is useful to do so, but if\nenough failures happen during a particular copy or delete operation, or if\na failure isn't retryable, the overall command fails.\n\nIf the -C option is provided, the command instead skips failing objects and\nmoves on. At the end of the synchronization run, if any failures were not\nsuccessfully retried, the rsync command reports the count of failures and\nexits with non-zero status. At this point you can run the rsync command\nagain, and gsutil attempts any remaining needed copy and/or delete\noperations.\n\nFor more details about gsutil's retry handling, see `Retry strategy\n`_.",
"IN THE CLOUD AND METADATA PRESERVATION": "If both the source and destination URL are cloud URLs from the same provider,\ngsutil copies data \"in the cloud\" (i.e., without downloading to and uploading\nfrom the machine where you run gsutil). In addition to the performance and\ncost advantages of doing this, copying in the cloud preserves metadata (like\nContent-Type and Cache-Control). In contrast, when you download data from the\ncloud it ends up in a file, which has no associated metadata, other than file\nmodification time (mtime). Thus, unless you have some way to hold on to or\nre-create that metadata, synchronizing a bucket to a directory in the local\nfile system will not retain the metadata other than mtime.\n\nNote that by default, the gsutil rsync command does not copy the ACLs of\nobjects being synchronized and instead will use the default bucket ACL (see\n\"gsutil help defacl\"). You can override this behavior with the -p option. See\nthe `Options section\n`_ to\nlearn how.",
"LIMITATIONS": "1. The gsutil rsync command will only allow non-negative file modification\n times to be used in its comparisons. This means gsutil rsync will resort to\n using checksums for any file with a timestamp before 1970-01-01 UTC.\n\n2. The gsutil rsync command considers only the live object version in\n the source and destination buckets when deciding what to copy / delete. If\n versioning is enabled in the destination bucket then gsutil rsync's\n replacing or deleting objects will end up creating versions, but the\n command doesn't try to make any noncurrent versions match in the source\n and destination buckets.\n\n3. The gsutil rsync command does not support copying special file types\n such as sockets, device files, named pipes, or any other non-standard\n files intended to represent an operating system resource. If you run\n gsutil rsync on a source directory that includes such files (for example,\n copying the root directory on Linux that includes /dev ), you should use\n the -x flag to exclude these files. Otherwise, gsutil rsync may fail or\n hang.\n\n4. The gsutil rsync command copies changed files in their entirety and does\n not employ the\n `rsync delta-transfer algorithm `_\n to transfer portions of a changed file. This is because Cloud Storage\n objects are immutable and no facility exists to read partial object\n checksums or perform partial replacements."
}
},
"setmeta": {
"capsule": "Set metadata on already uploaded objects",
"commands": {},
"flags": {
"-h": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies a header:value to be added, or header to be removed,\nom each named object.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-h",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"setmeta"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The gsutil setmeta command allows you to set or remove the metadata on one\nor more objects. It takes one or more header arguments followed by one or\nmore URLs, where each header argument is in one of two forms:\n\n- If you specify ``header:value``, it sets the provided value for the\n given header on all applicable objects.\n\n- If you specify ``header`` (with no value), it removes the given header\n from all applicable objects.\n\nFor example, the following command sets the ``Content-Type`` and\n``Cache-Control`` headers while also removing the ``Content-Disposition``\nheader on the specified objects:\n\n gsutil setmeta -h \"Content-Type:text/html\" \\\n -h \"Cache-Control:public, max-age=3600\" \\\n -h \"Content-Disposition\" gs://bucket/*.html\n\nIf you have a large number of objects to update you might want to use the\ngsutil -m option, to perform a parallel (multi-threaded/multi-processing)\nupdate:\n\n gsutil -m setmeta -h \"Content-Type:text/html\" \\\n -h \"Cache-Control:public, max-age=3600\" \\\n -h \"Content-Disposition\" gs://bucket/*.html\n\nYou can also use the setmeta command to set custom metadata on an object:\n\n gsutil setmeta -h \"x-goog-meta-icecreamflavor:vanilla\" gs://bucket/object\n\nCustom metadata is always prefixed in gsutil with ``x-goog-meta-``. This\ndistinguishes it from standard request headers. Other tools that send and\nreceive object metadata by using the request body do not use this prefix.\n\nSee \"gsutil help metadata\" for details about how you can set metadata\nwhile uploading objects, what metadata fields can be set and the meaning of\nthese fields, use of custom metadata, and how to view currently set metadata.\n\nNOTE: By default, publicly readable objects are served with a Cache-Control\nheader allowing such objects to be cached for 3600 seconds. For more details\nabout this default behavior see the CACHE-CONTROL section of\n\"gsutil help metadata\". If you need to ensure that updates become visible\nimmediately, you should set a Cache-Control header of \"Cache-Control:private,\nmax-age=0, no-transform\" on such objects. You can do this with the command:\n\n gsutil setmeta -h \"Content-Type:text/html\" \\\n -h \"Cache-Control:private, max-age=0, no-transform\" gs://bucket/*.html\n\nThe setmeta command reads each object's current generation and metageneration\nand uses those as preconditions unless they are otherwise specified by\ntop-level arguments. For example, the following command sets the custom\nmetadata ``icecreamflavor:vanilla`` if the current live object has a\nmetageneration of 2:\n\n gsutil -h \"x-goog-if-metageneration-match:2\" setmeta\n -h \"x-goog-meta-icecreamflavor:vanilla\""
}
},
"signurl": {
"capsule": "Create a signed URL",
"commands": {},
"flags": {
"-b": {
"attr": {},
"category": "",
"default": "",
"description": " Allows you to specify a user project that will be billed for\nrequests that use the signed URL. This is useful for generating\npresigned links for buckets that use requester pays.\nNote that it's not valid to specify both the ``-b`` and\n``--use-service-account`` options together.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-b",
"nargs": "0",
"type": "bool",
"value": ""
},
"-c": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies the content type for which the signed URL is\nvalid for.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-c",
"nargs": "0",
"type": "bool",
"value": ""
},
"-d": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies the duration that the signed URL should be valid\nfor, default duration is 1 hour.\nTimes may be specified with no suffix (default hours), or\nwith s = seconds, m = minutes, h = hours, d = days.\nThis option may be specified multiple times, in which case\nthe duration the link remains valid is the sum of all the\nduration options.\nThe max duration allowed is 7 days when ``private-key-file``\nis used.\nThe max duration allowed is 12 hours when -u option is used.\nThis limitation exists because the system-managed key used to\nsign the URL may not remain valid after 12 hours.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-d",
"nargs": "0",
"type": "bool",
"value": ""
},
"-m": {
"attr": {},
"category": "",
"default": "",
"description": "Specifies the HTTP method to be authorized for use\nwith the signed URL, default is GET. You may also specify\nRESUMABLE to create a signed resumable upload start URL. When\nusing a signed URL to start a resumable upload session, you will\nneed to specify the 'x-goog-resumable:start' header in the\nrequest or else signature validation will fail.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-m",
"nargs": "0",
"type": "bool",
"value": ""
},
"-p": {
"attr": {},
"category": "",
"default": "",
"description": "Specify the private key password instead of prompting.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-p",
"nargs": "0",
"type": "bool",
"value": ""
},
"-r": {
"attr": {},
"category": "",
"default": "",
"description": " Specifies the `region\n`_ in\nwhich the resources for which you are creating signed URLs are\nstored.\nDefault value is 'auto' which will cause gsutil to fetch the\nregion for the resource. When auto-detecting the region, the\ncurrent gsutil user's credentials, not the credentials from the\nprivate-key-file, are used to fetch the bucket's metadata.\nThis option must be specified and not 'auto' when generating a\nsigned URL to create a bucket.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-r",
"nargs": "0",
"type": "bool",
"value": ""
},
"-u": {
"attr": {},
"category": "",
"default": "",
"description": "Use service account credentials instead of a private key file\nto sign the URL.\nYou can also use the ``--use-service-account`` option,\nwhich is equivalent to ``-u``.\nNote that both options have a maximum allowed duration of\n12 hours for a valid link.",
"group": "",
"is_global": false,
"is_hidden": false,
"is_required": false,
"name": "-u",
"nargs": "0",
"type": "bool",
"value": ""
}
},
"groups": {},
"is_group": false,
"is_hidden": false,
"path": [
"gsutil",
"signurl"
],
"positionals": [],
"release": "GA",
"sections": {
"DESCRIPTION": "The signurl command will generate a signed URL that embeds authentication data\nso the URL can be used by someone who does not have a Google account. Please\nsee the `Signed URLs documentation\n`_ for\nbackground about signed URLs.\n\nMultiple gs:// URLs may be provided and may contain wildcards. A signed URL\nwill be produced for each provided URL, authorized\nfor the specified HTTP method and valid for the given duration.\n\nNOTE: Unlike the gsutil ls command, the signurl command does not support\noperations on sub-directories. For example, unless you have an object named\n``some-directory/`` stored inside the bucket ``some-bucket``, the following\ncommand returns an error: ``gsutil signurl gs://some-bucket/some-directory/``\n\nThe signurl command uses the private key for a service account (the\n'' argument) to generate the cryptographic\nsignature for the generated URL. The private key file must be in PKCS12\nor JSON format. If the private key is encrypted the signed URL command will\nprompt for the passphrase used to protect the private key file\n(default 'notasecret'). For more information regarding generating a private\nkey for use with the signurl command please see the `Authentication\ndocumentation.\n`_\n\nIf you used `service account credentials\n`_\nfor authentication, you can replace the argument with\nthe -u or --use-service-account option to use the system-managed private key\ndirectly. This avoids the need to download the private key file.",
"USAGE": "Create a signed URL for downloading an object valid for 10 minutes:\n\n gsutil signurl -d 10m gs:///