Managing access to AWS services on iOS clients vs on backend servers

When designing an iOS app that will interact with AWS (e.g. S3, CloudFront, etc), what are the pros and cons of managing the access to these services on the client vs. on the server?

By "managing the access", I mean things like uploading private content to S3, downloading private content via Cloudfront.

Of course, whichever side handles the access will need to store the AWS access key and secret. Security is one of the concerns.

I am equally interested in the impacts of this design choice on the performance and the flexibility of either implementation.

Lastly, is there an argument for implementing a hybrid approach where both client and server interact directly with AWS, or does the implementation usually go with either one or the other, but not both?

Briant asked 19/4, 2015 at 19:10 Comment(0)

While in general there are many scenarios in which you might want to do this either way, since you mention iOS there are hardly any cases in which you would want to do this directly from the client:

Pros for uploading data via server side to AWS:

  1. Security

    As already mentioned in the other answer, requiring authenticated requests up front will save you a lot of hassle from miscreants and hackers trying to break things. If the data is private and you are truly committed to privacy, a data leak is much easier to prevent when every request is authenticated.

  2. Rate limiting, user quotas, etc

    The added advantage of an authenticated system is that you can rate limit the requests coming from a particular source, say a user, group, or IP (or app-level quotas if you intend to build multiple apps around the same backend). Building this intelligence is not easy when the client talks to AWS directly. (See the sketch after this list.)

  3. Audit trail

    If you need to keep track of who uploaded what, when, and from where, this is once again much easier when the initial request hits your server.

  4. Exception handling on failure

    It is quite possible to hit failures you could not easily have predicted, or to miss a critical bug during QA testing. Handling these on the server is much more effective, because that code is under your control. When such issues surface on the client side, you are at the mercy of your users upgrading the app; on the server, additional checks can be deployed quickly for many such bugs, limiting their scope.

  5. Time to go live

    Again, as mentioned in the other answer, it can take a while before an app update is approved. This greatly reduces your responsiveness to critical issues, and it is hard to mitigate something serious (a data leak or privacy breach) that leads to significant losses (financial, user trust, negative ratings, etc.).
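
To make points 1-3 concrete, below is a minimal sketch of the proxy approach in Ruby (Sinatra plus the aws-sdk-s3 gem). The authenticate helper, bucket name, and quota numbers are illustrative assumptions, not part of the original design:

# ruby -- naive "everything goes through your server" upload endpoint
require 'sinatra'
require 'aws-sdk-s3'
require 'logger'

AUDIT   = Logger.new($stdout)
BUCKET  = Aws::S3::Resource.new.bucket('my-app-uploads')   # AWS keys stay on the server
UPLOADS = Hash.new { |h, k| h[k] = [] }                    # naive in-memory rate limiter
LIMIT   = 20                                               # max uploads per user per hour

post '/uploads/:key' do
  user = authenticate(request.env['HTTP_AUTHORIZATION'])   # hypothetical auth helper
  halt 401, 'not authenticated' unless user                # 1. security

  UPLOADS[user.id].reject! { |t| t < Time.now - 3600 }     # 2. rate limiting / quotas
  halt 429, 'quota exceeded' if UPLOADS[user.id].size >= LIMIT
  UPLOADS[user.id] << Time.now

  BUCKET.object("#{user.id}/#{params[:key]}").put(body: request.body)

  AUDIT.info("user=#{user.id} key=#{params[:key]} ip=#{request.ip}")   # 3. audit trail
  status 201
end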

The only cases in which I think you would want to upload data directly from the client to AWS are:

  • Uploading large amounts of data, very, very frequently, without direct processing.

    If uploading the data once costs a certain amount of bandwidth and network resources, uploading it twice claims double the resources (once from client --> server, then from server --> AWS). So if you upload large amounts of data frequently (think TBs daily), you end up spending a lot of resources just copying data from one point to another. In such cases it makes sense to push the data directly to S3. But for this approach to work, the cost savings have to be high enough to override the concerns about security and privacy, and for most applications that is simply not the case.

  • You are in a walled garden

    Basically, the app works only for certain pre-identified users and simply doesn't work for anyone else (say, an app built for in-house use at a company). In essence, this means having 100% confidence in the end users' motives for using your app.


EDIT: The OP asks in the comments:

How about the server providing signed URLs/cookies, which the client then uses to upload to S3 or download from CloudFront? The client still interacts directly with AWS, but with permissions controlled by the server.

At first glance, this seems very workable to me. This blog post provides many use cases (like providing wildcard signed URLs for reading) around signed URLs (though the examples are in .NET), and more information is available in the AWS docs.

Since you are going to handle the signing server side, you can easily take care of each of the points I mentioned earlier in my post (rate limiting, user quotas, audit trail, etc. are all workable, since the request will initially go to your server). As this answer mentions,

Signing URLs helps control who can access a given file and how long they can access it for.

Overall, this should work well for quite a few use cases.
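
For illustration, here is a rough sketch of what that server-side signing could look like with the AWS SDK for Ruby; the bucket, object key, CloudFront domain, key pair id, file path, and expiry times are all placeholder assumptions:

# ruby -- the server signs, the app then talks to AWS directly with the returned URLs
require 'aws-sdk-s3'
require 'aws-sdk-cloudfront'

# Short-lived S3 upload URL: the app PUTs the file straight to S3.
object     = Aws::S3::Resource.new.bucket('my-app-uploads').object('uploads/photo.jpg')
upload_url = object.presigned_url(:put, expires_in: 300)            # valid for 5 minutes

# Signed CloudFront URL for downloading private content.
signer = Aws::CloudFront::UrlSigner.new(
  key_pair_id: 'APKAEXAMPLE',                                       # placeholder key pair id
  private_key_path: '/secure/cloudfront_private_key.pem'            # placeholder path
)
download_url = signer.signed_url(
  'https://d111111abcdef8.cloudfront.net/private/photo.jpg',        # placeholder distribution
  expires: Time.now + 600                                           # link expires in 10 minutes
)

# Hand upload_url / download_url back to the iOS client over your authenticated API.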

Abuttal answered 27/4, 2015 at 5:5 Comment(4)
How about the server providing signed URLs/cookies, which the client then uses to upload to S3 or download from CloudFront? The client still interacts directly with AWS, but with permissions controlled by the server. Can you comment on this strategy?Briant
@Briant I don't think that will be very effective, check edits.Abuttal
I am not sure your edit regarding signed URLs is correct. The signed URLs do not need to be unsigned by the client in this case; instead, the client simply uses them as given to PUT to or GET from the AWS resources. And you can set an expiry time on these URLs so that the client cannot access the AWS resource after expiry. I appreciate your other points.Briant
@Briant You are right, the edit was wrong; I've rolled back the edit. Reading more on this, signed URLs seem like a good, secure option that could work. This blog post provides many use cases (like providing wildcard signed URLs for reading). Since you are going to handle the signing server side, you can easily take care of each of the points I mention in my post (rate limiting, user quotas, audit trail, etc. are all workable, since the request will initially go to the server). All in all, the best of both worlds I think.Abuttal

Security is the main reason I would place all/most of the AWS service authentication on the back-end after you have authenticated the user.

Another consideration is the amount of time it takes to update your application in the App Store, given Apple's approval process. Depending on the review queue, it can take days to push changes to your app; changes on the AWS back-end can be made at will.

Also, in designing an app to interact with AWS services, I always assume that anything transmitted can be compromised and will very likely be used by folks who have deconstructed your calls and reconstructed their own to suit their needs.

(For example, shortly after launching a photo entertainment application that uploads images and then applies filters, we noticed log entries, all from the same IP, with filter IDs that did not exist in the app. Those requests were not successful because they were not authenticated.)

Hope that helps.

Endocarditis answered 20/4, 2015 at 18:14 Comment(2)
Thanks for your input. The wait for App Store approval is a good point. Regarding the security aspect, isn't it true that someone has to decompile your iOS app and then use the AWS credentials to access your AWS resources? Without doing so, is there anything else malicious one can do with an iOS app that embeds AWS credentials and interacts directly with AWS?Briant
Without saying too much regarding security, the easiest thing for someone to do is to just trace the network between the phone and any services or servers used.Endocarditis

In addition to the other good answers, I'd like to make one more point: unlike with web apps, you cannot expect all users to be on the latest version of your app. This means that any server URL any version of your app has ever called must, in principle, remain live forever. So if you want to change your server infrastructure down the road (e.g. migrate from AWS to some other cloud host), you can't: even if you ship an updated version of the app with new URLs, there will still be un-updated installs out there calling the old ones.

You can of course build a "forced update" mechanism into the app so that it can't be used until it is updated (this is common in multiplayer games, but not many other places), or simply not care about the minority of users on old versions whose lives you make miserable (plot twist: they may be stuck on an old version because their device cannot be upgraded to the latest iOS).

But the nicer solution IMO is to hide the AWS URLs behind your own servers, so you never run into this problem. They are an implementation detail that you really shouldn't leak into the client.

Cincinnati answered 28/4, 2015 at 13:6 Comment(0)

For security reasons, it is important to keep your keys in a place where they cannot be tampered with: that generally means leaving them on the server.

Think of your keys this way: they grant access to your organization's resources. By putting them on a mobile device, theft of the keys affects your resources at the organization level. Instead, use user-level authentication on the mobile device to grant access to AWS resources through a proxy on your servers. That way, loss of user-level credentials does not incur organization-level losses, and user-level credentials are much easier to revoke.

You also mention uploads to S3. AWS has a nice facility called a presigned POST, where your server generates one-time upload credentials that your mobile device can use to upload data to S3 ... without proxying the data through your server.

# ruby (aws-sdk-s3) -- generate the one-time upload credentials; `key` is supplied by your app
require 'aws-sdk-s3'
bucket = Aws::S3::Resource.new.bucket('my-app-uploads')   # illustrative bucket name
presigned_post = bucket.presigned_post(key: key, success_action_status: '201', acl: 'public-read')
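
The PresignedPost object exposes a URL and a set of form fields. As a hedged sketch (the route name and JSON shape here are assumptions), the server might hand them to the app like this, and the app then performs a multipart/form-data POST of the file directly to S3:

# ruby -- hypothetical endpoint returning the one-time upload credentials to the iOS client
require 'sinatra'
require 'json'

get '/upload-credentials' do
  # ... authenticate the user and build `presigned_post` as above ...
  content_type :json
  { url: presigned_post.url, fields: presigned_post.fields }.to_json
end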
Coston answered 29/4, 2015 at 0:4 Comment(0)
