How do I set access permissions for an entire folder in a storage bucket? For example, I have 2 folders (containing many subfolders/objects) in a single bucket (let's call them folder 'A' and folder 'B') and 4 members in the project team. All 4 members can have read/edit access to folder 'A', but only 2 of the members are allowed to have access to folder 'B'. Is there a simple way to set these permissions for each folder? There are hundreds/thousands of files within each folder and it would be very time consuming to set permissions for each individual file. Thanks for any help.
It looks like this has become possible through IAM Conditions.
You need to set an IAM Condition like:
resource.name.startsWith('projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_PREFIX]')
This condition can't be used for the permission storage.objects.list though. Add two roles to a group/user: the first one grants list access to the whole bucket, and the second one contains the condition above to allow read/write access to all objects in your "folder". This way the group/user can list all objects in the bucket, but can only read/download/write the allowed ones.
There are some limitations here, such as no longer being able to use the gsutil acl ch commands referenced in other answers.
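For reference, here is a rough sketch of how the two bindings could be added from the command line, assuming a recent Google Cloud SDK whose gcloud storage buckets add-iam-policy-binding command accepts --condition; the bucket name, prefix, condition title, and group are placeholders:

# Unconditional binding so the group can list the whole bucket
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
  --member="group:team-a@example.com" \
  --role="roles/storage.objectViewer"

# Conditional binding restricting read/write to objects under folderA/
gcloud storage buckets add-iam-policy-binding gs://example-bucket \
  --member="group:team-a@example.com" \
  --role="roles/storage.objectAdmin" \
  --condition='expression=resource.name.startsWith("projects/_/buckets/example-bucket/objects/folderA/"),title=folderA-only'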
This doesn't seem to work when [OBJECT_PREFIX] has a slash in it, e.g. I don't seem to be able to allow access to files with prefix items/1234, only items. – Apogee
Leaving this here so someone else doesn't waste an afternoon beating their head against this wall. It turns out that 'list' permissions are handled at the bucket level in GCS and you can't restrict them using a Condition based on an object name prefix. If you do, you won't be able to access any resources in the bucket, so you have to set up the member with the unrestricted 'Storage Object Viewer' role and use Conditions with the specified object prefix on 'Storage Object Admin' or 'Storage Object Creator' to restrict (over)write access. Not ideal if you are trying to keep the contents of your bucket private.
https://cloud.google.com/storage/docs/access-control/iam
"Since the storage.objects.list permission is granted at the bucket level, you cannot use the resource.name condition attribute to restrict object listing access to a subset of objects in the bucket. Users without storage.objects.list permission at the bucket level can experience degraded functionality for the Console and gsutil."
It's very poorly documented, but search for "folder" in the gsutil acl ch manpage:
Grant the user with the specified canonical ID READ access to all objects in example-bucket that begin with folder/:
gsutil acl ch -r -u 84fac329bceSAMPLE777d5d22b8SAMPLE785ac2SAMPLE2dfcf7c4adf34da46:R gs://example-bucket/folder/
You cannot do this in GCS. GCS provides permissions to buckets and permissions to objects. A "folder" is not a GCS concept and does not have any properties or permissions.
As of 2024, Google Cloud Storage managed folders are now in preview.
With managed folders, you can organize your objects into groups and set IAM policies for more granular access control over data segments within a bucket.
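As a rough sketch (bucket, folder, and member names are placeholders, and the commands assume a gcloud release that includes the gcloud storage managed-folders group), access to folder 'B' could be granted to just the two allowed members, without giving the other two any bucket-wide objects role:

# Uniform bucket-level access must be enabled on the bucket for managed folders
gcloud storage managed-folders create gs://example-bucket/B/

# Repeat the binding for each member who should reach objects under B/
gcloud storage managed-folders add-iam-policy-binding gs://example-bucket/B/ \
  --member="user:alice@example.com" \
  --role="roles/storage.objectAdmin"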
- Make sure you have configured your bucket with fine-grained access control.
- Run: gsutil -m acl ch -r -g All:R gs://test/public/another/*
- If that doesn't work, add yourself the GCS Admin and Legacy Reader/Writer roles (which should be irrelevant, but it worked for me).
Requirements
- Cloud Storage structure
fine-grain-test-biswalc/
├── test1/
│   ├── __init__.py
│   └── utils.py
└── test2/
    ├── __init__.py
    └── globals.py
- User A or Service Account A needs to have access only to the test1 directory in the bucket.
- When the user accesses the test2 directory, they should get an error.
Solution
- Acquire a Service Account.
- console.cloud.google.com > IAM & Admin > Service Accounts > CREATE SERVICE ACCOUNT
- Select the Service Account > Keys tab > ADD KEY > CREATE NEW KEY > JSON
- Authenticate against the new key on your command line tool:
gcloud auth activate-service-account --key-file=my-key.json
- Create a custom role:
- console.cloud.google.com > IAM & Admin > Roles > CREATE ROLE
- Name: Storage.Objects.List
- Role launch stage: General Availability
- Permissions: storage.objects.list
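If you prefer the CLI, a roughly equivalent sketch (the project ID is a placeholder, and the role ID Storage_Objects_List is made up for illustration):

# Create the custom role containing only storage.objects.list
gcloud iam roles create Storage_Objects_List \
  --project=project-id \
  --title="Storage.Objects.List" \
  --stage=GA \
  --permissions=storage.objects.list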
- Provide permissions to the Service Account:
- console.cloud.google.com > IAM & Admin > IAM > GRANT ACCESS
- Add Principal: my-sa.project-id.iam.gserviceaccount.com
- Assign roles: Storage.Objects.List
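A roughly equivalent CLI sketch, assuming the service account email is my-sa@project-id.iam.gserviceaccount.com and the custom role ID from the previous step:

# Bind the custom role to the service account at the project level
gcloud projects add-iam-policy-binding project-id \
  --member="serviceAccount:my-sa@project-id.iam.gserviceaccount.com" \
  --role="projects/project-id/roles/Storage_Objects_List"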
- Create bucket:
- console.cloud.google.com > Cloud Storage > CREATE
- Name: fine-grain-test-biswalc
- Choose how to control access to objects > Access control > Uniform
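The same bucket could be created from the CLI roughly like this (the location is a placeholder; uniform bucket-level access is required for managed folders):

gcloud storage buckets create gs://fine-grain-test-biswalc \
  --location=us-central1 \
  --uniform-bucket-level-access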
- Managed Folder
- Once the folder structure is ready, go to the bucket.
- In the Folder browser, click the three dots for test1 > click Edit access.
- Click ATTACH MANAGED FOLDER
- Provide permissions:
- On the right pane, you will see: # Permissions for test1/
- Click ADD PRINCIPAL
- Select my-sa.project-id.iam.gserviceaccount.com
- Assign roles > Storage Admin
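To double-check the grant from the command line, something like the following should print the managed folder's IAM policy (assuming a gcloud release that includes the managed-folders group):

gcloud storage managed-folders get-iam-policy gs://fine-grain-test-biswalc/test1/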
- Test the setup using the commands below on your command line tool:
gsutil -m cp -r "gs://fine-grain-test-biswalc/test1" .
- Successful operation
gsutil -m cp -r "gs://fine-grain-test-biswalc/test2" .
- Errors out; refer to the error snippet below.
Error snippet:
Copying gs://fine-grain-test-biswalc/test2/__init__.py...
Copying gs://fine-grain-test-biswalc/test2/globals.py...
AccessDeniedException: 403 HttpError accessing <https://storage.googleapis.com/download/storage/v1/b/fine-grain-test-biswalc/o/test2%2Fglobals.py?generation=XXXXXXX&alt=media>: response: <{'content-type': 'text/html; charset=UTF-8', 'date': 'Fri, 26 Apr 2024 22:14:56 GMT', 'vary': 'Origin, X-Origin', 'x-guploader-uploadid': 'XXXXXXX-XXXXXX', 'expires': 'Fri, XXXXXX GMT', 'cache-control': 'private, max-age=0', 'content-length': 'XXX', 'server': 'UploadServer', 'alt-svc': 'h3=":443"; ma=XXX,h3-29=":XXX"; ma=XXX', 'status': '403'}>, content <my-sa.project-id.iam.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).>
CommandException: 1 file/object could not be transferred.
"fine-grain" test, should have chosen a better name for the bucket. – Riff
I tried all the suggestions here, including providing access with CEL. Then I came across the reason no one is successful in resolving this issue: GCP does not treat folders as existing.
From https://cloud.google.com/storage/docs/folders:
Cloud Storage operates with a flat namespace, which means that folders don't actually exist within Cloud Storage. If you create an object named folder1/file.txt in the bucket your-bucket, the path to the object is your-bucket/folder1/file.txt, but there is no folder named folder1; instead, the string folder1 is part of the object's name.
It's just a visual representation that gives us a hierarchical feel for the bucket and the objects within it.
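A quick way to see this for yourself (bucket and file names are placeholders):

# "folder1/" below is just part of the object's name; no folder object is created
touch file.txt
gcloud storage cp file.txt gs://your-bucket/folder1/file.txt
gcloud storage ls gs://your-bucket/folder1/   # lists the objects that share the prefix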