How to set access permissions of a Google Cloud Storage bucket folder

How do I set access permissions for an entire folder in a storage bucket? Example: I have 2 folders (containing many subfolders/objects) in a single bucket (let's call them folder 'A' and folder 'B') and 4 members in the project team. All 4 members should have read/edit access to folder 'A', but only 2 of the members are allowed to have access to folder 'B'. Is there a simple way to set these permissions for each folder? There are hundreds/thousands of files within each folder and it would be very time consuming to set permissions for each individual file. Thanks for any help.

Sherwin answered 25/7, 2016 at 18:54 Comment(1)
I'm looking for the same functionality and didn't see one open yet so I've created a feature request here: issuetracker.google.com/issues/145082842 – Squashy

It looks like this has become possible through IAM Conditions.

You need to set an IAM Condition like: resource.name.startsWith('projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_PREFIX]')

This condition can't be used for the permission storage.objects.list though. Add two roles to a group/user: the first one to grant list access to the whole bucket, and the second one containing the condition above to allow read/write access to all objects in your "folder". This way the group/user can list all objects in the bucket, but can only read/download/write the allowed ones.
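For example, the conditional binding could be added with gcloud along these lines (a rough sketch only; the group address is a placeholder, and the exact flag syntax may vary between gcloud versions):

gcloud storage buckets add-iam-policy-binding gs://[BUCKET_NAME] \
  --member='group:folder-a-team@example.com' \
  --role='roles/storage.objectAdmin' \
  --condition='title=folder-a-only,expression=resource.name.startsWith("projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_PREFIX]")'

The first, unconditional binding would then grant a role that carries storage.objects.list for the whole bucket (for instance a custom role containing only that permission).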

There are some limitations here, such as no longer being able to use the gsutil acl ch commands referenced in other answers.

Excavator answered 26/5, 2020 at 19:52 Comment(2)
I tried this, but I can't seem to make it work when [OBJECT_PREFIX] has a slash in it; e.g. I don't seem to be able to allow access to files with the prefix items/1234, only items – Apogee
I too need to restrict different users to different folders in my GCS bucket. I tried both approaches, i.e. fine-grained ACLs and IAM Conditions. The problem with fine-grained ACLs is that they only work on existing objects, and you have to define a mechanism to set a default ACL for objects that might be added in the future. An IAM Condition, however, really solves the problem of restricting access at the folder level and also works for future objects. Additionally, it provides a set of conditional operators which help in building powerful conditional logic as required. Thanks John for this answer. – Limbus

Leaving this here so someone else doesn't waste an afternoon beating their head against this wall. It turns out that 'list' permissions are handled at the bucket level in GCS, and you can't restrict them using a Condition based on an object name prefix. If you do, you won't be able to access any resources in the bucket, so you have to set up the member with an unrestricted 'Storage Object Viewer' role and use Conditions with the specified object prefix on 'Storage Object Admin' or 'Storage Object Creator' to restrict (over)write access. Not ideal if you are trying to keep the contents of your bucket private.

https://cloud.google.com/storage/docs/access-control/iam

"Since the storage.objects.list permission is granted at the bucket level, you cannot use the resource.name condition attribute to restrict object listing access to a subset of objects in the bucket. Users without storage.objects.list permission at the bucket level can experience degraded functionality for the Console and gsutil."

Canale answered 16/3, 2021 at 16:47 Comment(1)
I appreciate your concern for our heads! This wasn't immediately intuitive, but once you explained it, it makes sense. Thank you. – Ferdinana

It's very poorly documented, but search for "folder" in the gsutil acl ch manpage:

Grant the user with the specified canonical ID READ access to all objects in example-bucket that begin with folder/:

gsutil acl ch -r \
  -u 84fac329bceSAMPLE777d5d22b8SAMPLE785ac2SAMPLE2dfcf7c4adf34da46:R \
  gs://example-bucket/folder/
Miscellany answered 30/8, 2017 at 15:6 Comment(3)
Take note of #38575945 if you're trying to use this method. Brandon is correct that this does not set permissions for a "folder", but rather sets the ACL on all objects currently in the folder. New objects will have the default ACL. – Salgado
@Salgado is correct. Does anyone know of a way to change the default ACL for new objects created in a particular folder? – Canales
@pdoherty926, I have not tested it myself, but I suspect you could write a Cloud Function that applies the desired ACL to each newly uploaded object. – Gorlin
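For what it's worth, the default object ACL itself can only be changed per bucket, not per folder; a sketch with gsutil, using a placeholder user and bucket:

gsutil defacl ch -u someone@example.com:R gs://example-bucket

New objects anywhere in the bucket then pick up that READ entry; getting per-folder defaults would still need something like the Cloud Function idea above, or the IAM Conditions approach from the other answer.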

You cannot do this in GCS. GCS provides permissions to buckets and permissions to objects. A "folder" is not a GCS concept and does not have any properties or permissions.

Vardhamana answered 25/7, 2016 at 21:42 Comment(2)
While you're right about the flat namespace (see cloud.google.com/storage/docs/gsutil/addlhelp/…), it does appear possible to set an ACL for a "folder" (see my answer) – Miscellany
That gsutil command will indeed change the ACL of every object in that path, but it will not affect any future objects that are added later. – Vardhamana

As of 2024, Google Cloud Storage managed folders are now in preview.
With managed folders, you can organize your objects into groups and set IAM policies for more granular access control over data segments within a bucket.

https://cloud.google.com/storage/docs/managed-folders
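A minimal sketch with gcloud, assuming a reasonably recent SDK and placeholder names (managed folders require uniform bucket-level access on the bucket; for the original question this would be run once per folder, granting different members on 'A' and 'B'):

gcloud storage managed-folders create gs://example-bucket/B/
gcloud storage managed-folders add-iam-policy-binding gs://example-bucket/B/ \
  --member='user:someone@example.com' \
  --role='roles/storage.objectViewer'

Unlike plain prefixes and object ACLs, the managed folder keeps its IAM policy for objects added later.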

Siglos answered 29/2 at 9:38 Comment(0)
  1. Make sure you have configured your bucket to use fine-grained access control (see the gsutil sketch below for checking/changing this).
  2. gsutil -m acl ch -r -g All:R gs://test/public/another/*

  3. If that doesn't work, add yourself as GCS admin with the legacy reader/writer permissions (which should be irrelevant, but it worked for me).
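For step 1, uniform bucket-level access has to be off for object ACLs to work at all; a quick way to check and change that with gsutil (assuming the bucket from step 2 is named gs://test):

gsutil ubla get gs://test        # shows whether uniform bucket-level access is enabled
gsutil ubla set off gs://test    # switch back to fine-grained (ACL-based) access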

Freehold answered 5/10, 2021 at 19:6 Comment(0)

Requirements

  • Cloud Storage structure
fine-grain-test-biswalc/ 
├── test1/ 
│   ├── __init__.py 
│   └── utils.py 
└── test2/ 
    ├── __init__.py 
    └── globals.py

  • User A or Service Account A needs to have access only to the test1 directory in the bucket.
  • When the user accesses the test2 directory, they should get an error.

Solution

  • Acquire a Service Account.
    • console.cloud.google.com > IAM & Admin > Service Accounts > CREATE SERVICE ACCOUNT
    • Select the Service Account > Keys tab > ADD KEY > CREATE NEW KEY > JSON
  • Authenticate against the new key on your command line tool:
    • gcloud auth activate-service-account --key-file=my-key.json
  • Create a custom role:
    • console.cloud.google.com > IAM & Admin > IAM > Roles > CREATE ROLE
    • Name: Storage.Objects.List
    • Role launch stage: General Availability
    • Permissions: storage.objects.list
  • Provide permissions to the Service Account:
    • console.cloud.google.com > IAM & Admin > IAM > GRANT ACCESS
    • Add Principal: my-sa@project-id.iam.gserviceaccount.com
    • Assign roles: Storage.Objects.List
  • Create bucket:
    • console.cloud.google.com > Cloud Storage > CREATE
    • Name: fine-grain-test-biswalc
    • Choose how to control access to objects > Access control > Uniform
  • Managed Folder
    • Once the folder structure is ready, go to the bucket.
    • On the folder browser > click the three dots for test1 > click Edit access
    • Click ATTACH MANAGED FOLDER
  • Provide permissions:
    • On the Right pane, you will see: # Permissions for test1/
    • Click ADD PRINCIPAL
    • Select my-sa@project-id.iam.gserviceaccount.com
    • Assign roles > Storage Admin
  • Test the setup using the commands below on your command line tool:
    • gsutil -m cp -r "gs://fine-grain-test-biswalc/test1" .
      • Successful operation
    • gsutil -m cp -r "gs://fine-grain-test-biswalc/test2" .
      • Errors out, refer error snippet below.

Error snippet:

Copying gs://fine-grain-test-biswalc/test2/__init__.py...
Copying gs://fine-grain-test-biswalc/test2/globals.py...

AccessDeniedException: 403 HttpError accessing <https://storage.googleapis.com/download/storage/v1/b/fine-grain-test-biswalc/o/test2%2Fglobals.py?generation=XXXXXXX&alt=media>: response: <{'content-type': 'text/html; charset=UTF-8', 'date': 'Fri, 26 Apr 2024 22:14:56 GMT', 'vary': 'Origin, X-Origin', 'x-guploader-uploadid': 'XXXXXXX-XXXXXX', 'expires': 'Fri, XXXXXX GMT', 'cache-control': 'private, max-age=0', 'content-length': 'XXX', 'server': 'UploadServer', 'alt-svc': 'h3=":443"; ma=XXX,h3-29=":XXX"; ma=XXX', 'status': '403'}>, content <my-sa.project-id.iam.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object. Permission &#39;storage.objects.get&#39; denied on resource (or it may not exist).>

CommandException: 1 file/object could not be transferred.
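Roughly the same setup can also be scripted instead of clicking through the console. This is a sketch only, using the names from the steps above and assuming a current gcloud with the managed-folders commands; it grants roles/storage.objectAdmin on the managed folder, which is enough for the cp test (the console steps above grant Storage Admin):

# service account and key
gcloud iam service-accounts create my-sa --project=project-id
gcloud iam service-accounts keys create my-key.json \
  --iam-account=my-sa@project-id.iam.gserviceaccount.com

# custom role with only storage.objects.list, granted at the project level
gcloud iam roles create Storage.Objects.List --project=project-id \
  --permissions=storage.objects.list --stage=GA
gcloud projects add-iam-policy-binding project-id \
  --member='serviceAccount:my-sa@project-id.iam.gserviceaccount.com' \
  --role='projects/project-id/roles/Storage.Objects.List'

# bucket with uniform access, plus a managed folder on test1/
gcloud storage buckets create gs://fine-grain-test-biswalc --uniform-bucket-level-access
gcloud storage managed-folders create gs://fine-grain-test-biswalc/test1/
gcloud storage managed-folders add-iam-policy-binding gs://fine-grain-test-biswalc/test1/ \
  --member='serviceAccount:my-sa@project-id.iam.gserviceaccount.com' \
  --role='roles/storage.objectAdmin'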
Riff answered 27/4 at 0:41 Comment(1)
It's not really a fine-grain test; I should have chosen a better name for the bucket. – Riff

I tried all the suggestions here, including providing access with CEL. Then I came across the reason why no one fully succeeds in resolving this issue: GCP does not treat folders as existing.

From https://cloud.google.com/storage/docs/folders:

Cloud Storage operates with a flat namespace, which means that folders don't actually exist within Cloud Storage. If you create an object named folder1/file.txt in the bucket your-bucket, the path to the object is your-bucket/folder1/file.txt, but there is no folder named folder1; instead, the string folder1 is part of the object's name.

It's just a visual representation that gives us a hierarchical feel for the bucket and the objects within it.
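A quick way to see this, assuming a bucket laid out as in the quote:

gsutil ls gs://your-bucket/           # folder1/ appears only as a synthetic prefix
gsutil ls gs://your-bucket/folder1/   # lists the objects whose names start with folder1/

The "folder" only exists as a shared prefix in the object names.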

Eclair answered 28/7, 2022 at 7:51 Comment(0)
