How useful/important is REST HATEOAS (maturity level 3)? [closed]

I'm getting involved in a project where some senior team members believe that a REST API has to be HATEOAS compliant and implement all of Richardson's maturity levels (http://martinfowler.com/articles/richardsonMaturityModel.html)!

AFAIK, most REST implementations are not HATEOAS compliant, and there must be a good reason why more people aren't doing it. I can think of reasons like added complexity, lack of frameworks (on both the server and client sides), and performance concerns.

What do you think? Have you had any experience with HATEOAS in a real world project?

Vernissage answered 2/12, 2013 at 19:11 Comment(1)
Here is a good article on the subject: medium.com/@andreasreiser94/… Basically, the way "REST" is normally implemented, it is RPC...Persephone

Nobody in the REST community says REST is easy. HATEOAS is just one of the aspects that adds difficulty to a REST architecture.

People don't do HATEOAS for all the reasons you suggest: it's difficult. It adds complexity to both the server-side and the client (if you actually want to benefit from it).

HOWEVER, billions of people experience the benefits of REST today. Do you know what the "checkout" URL is at Amazon? I don't. Yet I can check out every day. Has that URL changed? I don't know, and I don't care.

Do you know who does care? Anyone who's written a screen-scraped Amazon automated client. Someone who has likely painstakingly sniffed web traffic, read HTML pages, etc. to find what links to call when and with what payloads.

And as soon as Amazon changed their internal processes and URL structure, those hard-coded clients failed -- because the links broke.

Yet, the casual web surfers were able to shop all day long with hardly a hitch.

That's REST in action, it's just augmented by the human being that is able to interpret and intuit the text-based interface, recognize a small graphic with a shopping cart, and suss out what that actually means.

Most folks writing software don't do that. Most folks writing automated clients don't care. Most folks find it easier to fix their clients when they break than engineer the application to not break in the first place. Most folks simply don't have enough clients where it matters.

If you're writing an internal API to communicate between two systems with expert tech support and IT on both sides of the traffic, who are able to communicate changes quickly, reliably, and with a schedule of change, then REST buys you nothing. You don't need it, your app isn't big enough, and it's not long-lived enough to matter.

Large sites with large user bases do have this problem. They can't just ask folks to change their client code on a whim when interacting with their systems. The server's development schedule is not the same as the client development schedule. Abrupt changes to the API are simply unacceptable to everyone involved, as it disrupts traffic and operations on both sides.

So, an operation like that would very likely benefit from HATEOAS, as it's easier to version, easier for older clients to migrate, easier to be backward compatible than not.

A client that delegates much of its workflow to the server and acts upon the results is much more robust to server changes than a client that does not.

But most folks don't need that flexibility. They're writing server code for 2 or 3 departments, it's all internal use. If it breaks, they fix it, and they've factored that into their normal operations.

Flexibility, whether from REST or anything else, breeds complexity. If you want it simple and fast, then you don't make it flexible; you "just do it" and be done. As you add abstractions and dereferencing to systems, stuff gets more difficult: more boilerplate, more code to test.

Much of REST fails the "you're not going to need it" bullet point. Until, of course, you do.

If you need it, then use it, and use it as it's laid out. REST is not shoving stuff back and forth over HTTP. It never has been, it's a much higher level than that.

But when you do need REST, and you do use REST, then HATEOAS is a necessity. It's part of the package and a key to what makes it work at all.

Example: to understand this better, let's look at the response below for retrieving the user with id 123 from the server (http://localhost:8080/user/123):

{
    "name": "John Doe",
    "links": [{
            "rel": "self",
            "href": "http://localhost:8080/user/123"
        },
        {
            "rel": "posts",
            "href": "http://localhost:8080/user/123/post"
        },
        {
            "rel": "address",
            "href": "http://localhost:8080/user/123/address"
        }
    ]
}
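To make the point concrete, here is a minimal sketch (in Python, with illustrative helper and variable names that are not part of any standard) of a client that consumes this response by link relation instead of hard-coding URL structures:

```python
# Sketch of a hypermedia client that follows link relations rather than
# constructing URLs itself. The dict mirrors the JSON response above.

def find_href(resource, rel):
    """Return the href of the link with the given relation, or None."""
    for link in resource.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

user = {
    "name": "John Doe",
    "links": [
        {"rel": "self", "href": "http://localhost:8080/user/123"},
        {"rel": "posts", "href": "http://localhost:8080/user/123/post"},
        {"rel": "address", "href": "http://localhost:8080/user/123/address"},
    ],
}

# The client never builds "/user/123/post" itself; if the server renames
# that URL, only the href in the response changes and the client keeps working.
posts_url = find_href(user, "posts")
print(posts_url)  # http://localhost:8080/user/123/post
```

This is the automated-client analogue of the Amazon shopper above: the client is coupled to the link relation names, not to the URL layout.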
Er answered 2/12, 2013 at 19:31 Comment(11)
I feel like you should have at least a thousand more likes for this answer. Honestly, I have to imagine the "How important is it to be 'real' REST?" question comes up quite a bit. Hell, I was doing some googling for just that reason, looking for ammunition to use in an upcoming meeting, when I found this thread.Riocard
Thank god (or code), someone is talking about the disadvantages of HATEOAS as well!Decare
Is there any other advantage than the ability to easily change URLs? You can't just add new options, because unlike humans, the program can't work with something new. Plus, you have only shifted from knowing URLs to knowing the names of actions.Tushy
If the API consumer doesn't know anything, it can only delegate user actions 1:1.Tushy
Regarding changing URLs, don't forget that your client might use a cache, so you must keep behaviour on the server to handle the previous URL as well (or just do a redirect). As with any strategy for evolving APIs, you must keep your old behaviour working. HATEOAS doesn't help much there.Pentose
I think REST was an approach to make HTTP more usable and more structured. REST reduced complexity. From what I see, HATEOAS is like it sounds when you read it: a lot of people will hate it. Because if you have a class framework inside an app, HATEOAS just adds complexity. If an API changes and the client does not adopt the change, consistency is lost. I think it's very dangerous. Either you do a web-only thing or you have a stable and reliable API.Fixity
@BrunoCosta No web application lives for eternity, and you can solve that easily by having your cached resources expire after a certain time.Parsonage
How is "not knowing the Amazon checkout URL" a product of REST? Amazon can hardcode that URL in their client all day long and you still won't have to know it. Are you talking about as an end-user? Or app developer?Analogy
@Fixity I think you are looking at this from the wrong angle. It's not adding more complexity; in fact, it requires less client-side duplication. Things like access rights, different states, and limiting or adding actions are handled by the API and not by the client. It allows for leaner and more flexible client-side code. If you look at what people call "REST", which is really more RPC, it's much more complex, requires more logic from the client, and is a lot more brittle, which means even small things like an Enum change could break the client. But with HATEOAS you do not have that problem.Extension
@DusanTurajlic, sorry, I didn't get it. If an "Enum change" occurred, I need to reflect the changes in the client manually. I don't see how HATEOAS mitigates this, because I got a completely unexpected value in my Enum field, and HATEOAS doesn't tell my client "run reflection to adapt to the new values".Panelist
@Panelist Well, it depends on what kind of Enum change it is. Have you added a new property to the enum? You probably don't need to do anything, depending on how your clients have integrated it. If you have changed a value, you can, instead of changing the value, introduce a new Enum and put it in a new relation. That way you can ask your clients to migrate to the new relation before you deprecate the old one. This assumes you have some type of tracking. It's a different way of thinking about it, but it allows you to do more with fewer changes and keep things backward compatible.Extension

Yes, I have had some experience with hypermedia in APIs. Here are some of the benefits:

  1. Explorable API: It may sound trivial but do not underestimate the power of an explorable API. The ability to browse around the data makes it a lot easier for the client developers to build a mental model of the API and its data structures.

  2. Inline documentation: The use of URLs as link relations can point client developers to documentation.

  3. Simple client logic: A client that simply follows URLs instead of constructing them itself should be easier to implement and maintain.

  4. The server takes ownership of URL structures: The use of hypermedia removes the client's hard-coded knowledge of the URL structures used by the server.

  5. Offloading content to other services: Hypermedia is necessary when offloading content to other servers (a CDN, for instance).

  6. Versioning with links: Hypermedia helps versioning of APIs.

  7. Multiple implementations of the same service/API: Hypermedia is a necessity when multiple implementations of the same service/API exist. A service could, for instance, be a blog API with resources for adding posts and comments. If the service is specified in terms of link relations instead of hard-coded URLs, then the same service may be instantiated multiple times at different URLs, hosted by different companies, but still accessible through the same well-defined set of links by one single client.

You can find an in-depth explanation of these bullet points here: http://soabits.blogspot.no/2013/12/selling-benefits-of-hypermedia.html

(there is a similar question here: https://softwareengineering.stackexchange.com/questions/149124/what-is-the-benefit-of-hypermedia-hateoas where I have given the same explanation)
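Point 7 can be sketched in a few lines of Python. The service roots and link-relation name below are purely illustrative; the point is that the client is parameterized only by an entry point, and everything else is discovered through links:

```python
# Sketch: one client, many deployments of the same service. Only the
# entry-point document differs; all URLs are discovered via link relations.

def follow(resource, rel):
    """Look up a link by relation name in a hypermedia resource."""
    for link in resource.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    raise KeyError(f"no link with rel {rel!r}")

# Two companies hosting the same blog API under different URL structures:
service_a = {"links": [{"rel": "posts", "href": "https://a.example.com/api/posts"}]}
service_b = {"links": [{"rel": "posts", "href": "https://b.example.net/blog/entries"}]}

# The same client code works against both deployments, because it is
# written against link relations, not against URL structures.
for entry_point in (service_a, service_b):
    print(follow(entry_point, "posts"))
```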

Domicile answered 11/12, 2013 at 9:4 Comment(3)
Multiple implementations of the same service: can you elaborate? I don't see how it helps.Straticulate
I have tried to explain it in the text. Does it help?Launcher
7. So we have a "WordPrassX" service with an API installed by many companies at URL1 and URL2. I have a client for that API. With HATEOAS, I need to call "URL1/posts" and then follow links. Without HATEOAS, I will use "{api_url}/posts", then "{api_url}/comments" (and so on, through all required resources). So even without HATEOAS it is doable, and I have the SAME client application, and it works with MANY services. Please, can you elaborate the real "necessity" of HATEOAS, because many devs can still do the same without it?Panelist

Richardson's maturity level 3 is valuable and should be adopted. Jørn Wildt has already summarized some advantages, and another answer, by Wilt, complements it very well.

However, Richardson's maturity level 3 is not the same as Fielding's HATEOAS. Richardson's maturity level 3 is only about API design. Fielding's HATEOAS is about API design too, but it also prescribes that the client software should not assume that a resource has a specific structure beyond the structure defined by its media type. This requires a very generic client, like a web browser, which has no knowledge of specific websites. Since Roy Fielding coined the term REST and set HATEOAS as a requirement for compliance with REST, the question is: do we want to adopt HATEOAS, and if not, can we still call our API RESTful? I think we can. Let me explain.

Suppose we have achieved HATEOAS. The client-side of the application is now very generic, but most likely, the user experience is bad, because without any knowledge of the semantics of the resources, the presentation of the resources cannot be tailored to reflect those semantics. If resource 'car' and resource 'house' have the same media type (e.g. application/json), then they will be presented to the user in the same way, for example as a table of properties (name/value pairs).

But okay, our API is really RESTful.

Now, suppose we build a second client application on top of this API. This second client violates the HATEOAS ideas and has hard-coded information about the resources. It displays a car and a house in different ways.

Can the API still be called RESTful? I think so. It is not the API's fault that one of its clients has violated HATEOAS.

I advise building RESTful APIs, i.e. APIs for which a generic client can be implemented in theory, but in most cases you need some hard-coded information about resources in your client in order to satisfy the usability requirements. Still, try to hard-code as little as possible, to reduce the dependencies between client and server.

I have included a section on HATEOAS in my REST implementation pattern called JAREST.

Goaltender answered 24/10, 2015 at 16:14 Comment(0)

We are building a REST level 3 API where our responses are in HAL-JSON. HATEOAS is great for both front-end and back-end, but it comes with challenges. We made some customizations/additions for managing ACLs inside the HAL-JSON response as well (which doesn't break the HAL-JSON standard).

The biggest advantage of HATEOAS I see is that we do not need to write/guess any URLs in our front-end application. All you need is an entry point (https://hostname), and from there on you can just browse your way through the resources using the links or templated links provided inside the response. That way, versioning, renaming/replacing URLs, and extending resources with additional relations can all be handled without breaking front-end code.

Caching of resources on the front-end is a piece of cake using the self links. We also push resources to clients through a socket connection; since those are also rendered in HAL, we can easily add them to the cache the same way.
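The self-link caching idea can be sketched as follows (Python pseudocode-style sketch; the resource shapes are illustrative, only the HAL `_links`/`self` structure is standard):

```python
# Sketch of caching HAL resources by their canonical "self" href.
# A resource arriving via GET or pushed over a socket is stored the
# same way, because both carry a self link.

cache = {}

def store(resource):
    """Index a HAL resource under its self href."""
    href = resource["_links"]["self"]["href"]
    cache[href] = resource

store({
    "_links": {"self": {"href": "https://hostname/api/v1/posts/1"}},
    "subject": "Post subject",
})

# Later lookups (or cache invalidations) key off the same self link:
print(cache["https://hostname/api/v1/posts/1"]["subject"])  # Post subject
```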

Another advantage of using HAL-Json is that it is clear what the response model should look like, since there is a documented standard that should be followed.

One of our customizations is that we added an actions object inside the self-link object that exposes to the front end which actions or CRUD operations the authenticated user is allowed to perform on the respective resource (create:POST, read:GET, update:PUT, edit:PATCH, delete:DELETE). Like this, our front-end ACL is totally dictated by our REST API response, moving this responsibility fully to the back-end model.

So to give a quick example you could have a post object in HAL-Json that looks something like this:

{
    "_links": {
        "self": {
            "href": "https://hostname/api/v1/posts/1",
            "actions": {
                "read": "GET",
                "update": "PUT",
                "delete": "DELETE"
            }
        }
    },
    "_embedded": {
        "owner": {
            "id": 1,
            "name": "John Doe",
            "email": "[email protected]",
            "_links": {
                "self": {
                    "href": "https://hostname/api/v1/users/1",
                    "actions": {
                        "read": "GET"
                    }
                }
            }
        }
    },
    "subject": "Post subject",
    "body": "Post message body"
}

Now all we have to do on the front end is build an AclService with an isAllowed method that checks whether the action we want to perform is in the actions object.

Currently on front-end it looks as simple as: post.isAllowed('delete');
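A minimal sketch of that check (in Python here for illustration; the `actions` object is this answer's custom HAL extension, not part of the HAL standard, and the class and method names are illustrative):

```python
# Sketch of the front-end ACL check: the client never decides what is
# allowed; it only inspects the actions the API exposed on the self link.

class AclResource:
    def __init__(self, resource):
        self._resource = resource

    def is_allowed(self, action):
        """True if the API exposed this action on the resource's self link."""
        actions = self._resource["_links"]["self"].get("actions", {})
        return action in actions

post = AclResource({
    "_links": {"self": {
        "href": "https://hostname/api/v1/posts/1",
        "actions": {"read": "GET", "update": "PUT", "delete": "DELETE"},
    }},
})

print(post.is_allowed("delete"))  # True
print(post.is_allowed("create"))  # False: the API did not expose it
```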

I think REST level 3 is great, but it can lead to some headaches. You will need a good understanding of REST, and if you want to work with level 3, I would suggest following the REST concept strictly; otherwise, you will easily get lost along the way when implementing it.

In our case we have the advantage that we are building both the front-end and the back-end, but in principle that should NOT make a difference. A common pitfall I have seen in our team, though, is that some developers try to solve front-end (architecture) issues by changing their back-end model so it "suits" the front-end needs.

Parsonage answered 17/10, 2018 at 6:39 Comment(1)
Very good answer. I think such a practical example was what the original questioner was looking for.Goaltender

I have used HATEOAS in some real projects, but with a different interpretation than Richardson's. If that is what your bosses want, then I guess you should just do it. I take HATEOAS to mean that your resources should include an HTML doctype, hyperlinks to related resources, and HTML forms to expose functionality for verbs other than GET. (This applies when the Accept type is text/html; other content types don't require these extras.) I don't know where the belief came from that all REST resources in your entire application have to be glued together. A network application should contain multiple resources that may or may not be directly related. Nor do I know why it is believed that XML, JSON, and other types need to follow this. (HATEOAS is HTML-specific.)

Trichina answered 2/12, 2013 at 20:38 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.