This is a somewhat simple question, but sadly I have not been able to find a concrete answer thus far.
We are constructing an API (we're not in production yet) which returns a large amount of data after user authentication, etc. The API tracks each user's usage on a per-second and per-hour basis. When the user exceeds either of those limits, the server returns no content and some HTTP error code.
Presently, I'm using 406 Not Acceptable, but I don't believe that's the best code to use. It's been suggested that 509 Bandwidth Limit Exceeded would be a good one, but I wonder if there is a code that would be considered best practice for my situation. Thank you in advance for your help!
Status code 429 comes to mind:
RFC 6585, section 4: 429 Too Many Requests
The 429 status code indicates that the user has sent too many
requests in a given amount of time ("rate limiting").
The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request.
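For illustration, a minimal sketch of such a limiter (assuming Flask; is_rate_limited, the route, and the 30-second hint are all invented placeholders for your own tracking logic):

from flask import Flask, jsonify

app = Flask(__name__)

def is_rate_limited(user) -> bool:
    ...  # stub: consult your per-second and per-hour counters here

@app.get("/data")
def data():
    user = "alice"  # stand-in for the authenticated user
    if is_rate_limited(user):
        # 429 Too Many Requests, plus the optional Retry-After hint from the RFC
        return jsonify(error="rate limit exceeded"), 429, {"Retry-After": "30"}
    return jsonify(payload="the large response goes here")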
Well, since you've found no applicable error code, I'd guess there isn't one. In this situation, if I were you, I'd stick with your 406 or something like that; just decide on something and keep using it. The browser doesn't care anyway, and the APIs are used by people who will accept whatever code you return and deduce the rule: "if I exceed the usage, I get 406". I think it doesn't really matter what the magic number is.
Let's say we have an API with a route /foo/<id> that represents an instance of an object like this:
from typing import Optional

class Foo:
    bar: Optional["Bar"]  # forward reference, since Bar is defined below
    name: str
    ...

class Bar:
    ...
(Example in Python just because it's convenient, this is about the HTTP layer rather than the application logic.)
We want to expose full serialized Foo instances (which may have many other attributes) under /foo/<id>, but, for the sake of efficiency, we also want to expose /foo/<id>/bar to give us just the .bar attribute of the given Foo.
It feels strange to me to use 404 as the status response when bar is None here, since that's the same status code you'd get if you requested some arbitrarily incorrect route like /random/gibberish; if we had automatic handling of 404 status in our client-side layer, it would misinterpret this, with likely explanations such as "we forgot to log in" or "the client-side URL routing was wrong".
However, 200 with a response body of null (if we're serializing to JSON) feels odd as well, because the presence or absence of the entity at the given endpoint is usually communicated via a status rather than inline in the body. Would 204 with an empty response body be the right thing to say here? Is a 404 the right way to go, and if so, what's the right way for the server to communicate nuances like "that was a totally expected and correct route" or "actually, the foo ID you specified was incorrect; this isn't missing because the attribute was unset"?
What are the advantages and disadvantages of representing the missing-ness of this attribute in different ways?
I wonder if you could more clearly articulate why a 200 with a null response body is odd. I think it communicates exactly what you want, as long as you're not trying to differentiate between a given Foo not having a bar (e.g. Foo.has_key?(bar)) and Foo having a bar explicitly set to null.
Of 404, https://developer.mozilla.org says,
In an API, this can also mean that the endpoint is valid but the resource itself does not exist.
so I think it's acceptable. 204 doesn't strike me as particularly outlandish in this situation, but it is more commonly associated (IME, at least) with DELETEs (and occasionally PUTs/POSTs that don't return results).
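To make the trade-off concrete, here is a minimal sketch of the 200-with-null reading (assuming Flask; the FOOS store is an invented stand-in for the real database):

from flask import Flask, abort, jsonify

app = Flask(__name__)
FOOS = {1: {"name": "first", "bar": None}}  # invented in-memory store

@app.get("/foo/<int:foo_id>/bar")
def get_bar(foo_id):
    foo = FOOS.get(foo_id)
    if foo is None:
        abort(404)  # the Foo itself is missing: a "real" 404
    # Foo exists but its .bar attribute is unset: 200 with a literal JSON null body
    return jsonify(foo["bar"])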
I also struggle a lot with this because:

1. 404 can point to a nonexistent URL, or to a path that is acceptable but where the particular referenced resource does not exist. I have also used it to error out on request bodies that carry identifiers that are nonexistent.

2. A lot of people shoehorn these errors into the Bad Request (400) error code, which is somewhat acceptable but also a cop-out. (Literally anything the server did not process successfully can be classified as a bad request, if you think about it.)

3. With 2 (above) in mind, a 400 with some helpful message body is sometimes used to wash out the guilt of not committing outright to a 404, but this demands some parsing expectations on the client's side (see the sketch below), which is not always nice. Also, returning a 400 is, going by the definition quoted here, kind of gaslighting the client, because 400 errors are supposed to be entirely the client's fault with regard to the structure of the request, not because the client asked for something not in your db:

   The 400 Bad Request response status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing).

4. The general feeling is that 200 means all is good, and therefore there's always a tacit expectation that the response will contain some form of body, not null. (Right??) I wouldn't encourage using a 200 for these situations. And while 204s don't carry the responsibility of having to carry a response body, they also sort of convey the message that "something worked", which is not the message you want to send here, right?

What am I trying to say? Thoughtful API design is hard.
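If you do commit to a 404 for both cases, one common compromise is a machine-readable error body that lets clients tell the two situations apart. A rough sketch (assuming Flask; the "code" vocabulary and the FOOS store are invented):

from flask import Flask, jsonify

app = Flask(__name__)
FOOS = {1: {"name": "first", "bar": None}}  # invented stand-in for the database

@app.get("/foo/<int:foo_id>/bar")
def get_bar(foo_id):
    foo = FOOS.get(foo_id)
    if foo is None:
        # the foo ID itself was wrong
        return jsonify(code="foo_not_found", detail="no Foo with that id"), 404
    if foo["bar"] is None:
        # expected, correct route; the attribute just isn't set
        return jsonify(code="bar_unset", detail="Foo exists but bar is not set"), 404
    return jsonify(foo["bar"])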
In a change password request, if the old password is not right, what would be the correct response?
I'm thinking 401 Unauthorized? Or is it 400 Bad Request?
It's a bit tricky because I think you can make the argument for either, and I also feel that 409 and 422 could be argued.
Ultimately I think it's only important to use a more specific HTTP status code if a generic client can do something useful with the response. Because of this, I think it doesn't really matter in this case.
I think I would be tempted not to use 401, because I associate it closely with the Authorization header, and you're probably not using that header in this case.
422 or 400 are the best. This is entirely a matter of opinion. Either of those indicates that there was something wrong with the request (422 being a bit more specific: there's nothing wrong with the format, but there is something wrong with the actual values sent to the server).
409 is sometimes used to indicate that the current request is valid, but the current state of the server prevents it from being successful. Given that the current state of the server is "the current password was something else", 409 could be appropriate.
Ultimately I don't think any of them is really wrong, and it's not hugely important, but my vote would go to 422 first and 400 next.
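As a rough illustration of the 422 option (a sketch assuming Flask; check_password and the user lookup are invented placeholders):

from flask import Flask, jsonify, request

app = Flask(__name__)

def check_password(user, password) -> bool:
    ...  # stub: verify against the stored hash

@app.post("/change-password")
def change_password():
    body = request.get_json()
    user = "alice"  # stand-in for the authenticated user
    if not check_password(user, body["old_password"]):
        # the request is well-formed, but the values are wrong: 422
        return jsonify(error="old password is incorrect"), 422
    # ... persist the new password here ...
    return "", 204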
Some sources (the first few are mine; the last link is from one of the authors of the HTTP specification):
https://evertpot.com/http/400-bad-request
https://evertpot.com/http/422-unprocessable-entity
https://evertpot.com/http/409-conflict
https://www.mnot.net/blog/2017/05/11/status_codes
The answer is the same as it would be for a login page.
Now, what is that answer? There's some disagreement on that question, but in my opinion it's 200. I discuss that in this answer and link to some other opinions.
I'm not sure this is a purely Stack Overflow-relevant question; it is related to general design practice. Since I cannot think of another relevant Stack Exchange site, I'm posting it here.
In the general design practice of converting an async call to a sync one, we use a time-out and wait for the results. While this may not be exactly good practice from the point of view of responsiveness, it definitely makes the implementation easier.
I have seen many such implementations and often noticed that the developers tend to give a very small time-out value. I can understand that people may have had the need for a responsive system in mind when they did this. But many of the applications I have seen are very data-critical ones where the loss of data is very bad. So, it is always better to wait longer and try to get as much data as possible, instead of timing out early and giving an error message to the user. The situations where the server fails to give data, or the client is unable to reach the server, etc., are rare, and in those situations I would expect a large time-out for such waits. After all, these time-outs don't mean that the wait will definitely last until the given time-out value; the timeout value is only an upper limit. So, I have always argued for higher values here. But I see the use of low values in more and more places, and now I'm getting confused about whether there really is something in this practice that I don't understand.
So, my question is: are there any arguments, other than the need for responsiveness, for implementing a very small time-out for waiting?
As always, the right decision depends on the real-life data.
The timeout should be proportional to the time it usually takes to complete an operation successfully.
Sending a UDP message, for example, could take between 1 and 50 milliseconds, so a timeout of 100 milliseconds is more than reasonable; copying a file over the wire, however, could take minutes or more, so a 100-millisecond timeout is laughable.
There are pros and cons to both short and long timeouts, so it's a tradeoff. Longer timeouts use more resources (tasks, threads, memory, etc.) for the same amount of work, while short timeouts, as you mentioned, may result in loss of data.
In conclusion, you need to set a configurable timeout that sounds reasonable, then figure out in production whether you are timing out too many operations or the other way around, and calibrate accordingly.
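A sketch of that advice in Python (fetch_data is a stub for the real operation; concurrent.futures is just one way to put a configurable upper bound on a blocking wait):

import concurrent.futures

# One shared pool, so a timed-out task can finish in the background
# without blocking the caller.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def fetch_data():
    ...  # stub: the slow operation being waited on

def call_with_timeout(timeout_seconds):
    future = pool.submit(fetch_data)
    try:
        # result() returns as soon as the work finishes; the timeout is
        # only an upper bound, as the question points out.
        return future.result(timeout=timeout_seconds)
    except concurrent.futures.TimeoutError:
        return None  # or retry, or surface an error to the user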
Recently, I ran into a problem with my application: the size of the JSON string returned from the server was exceeding the default maxJsonLength. I've done some research and implemented some fixes including a variation of paging. Everything looks great at the moment. However, I still have some questions unanswered.
First of all, the majority of the sources point to this article:
http://geekswithblogs.net/frankw/archive/2008/08/05/how-to-configure-maxjsonlength-in-asp.net-ajax-applications.aspx
1. Why 2,097,152 (2MB)? 2MB is way too much data to be loaded for a web page. (Unless the user is downloading something, but that's a different story.) Even 1MB is too much.
2. Then, the author goes on with an example of a maxJsonLength of 500,000. Why this number? Is it just an example of how to set the property? Some sources state that 500,000 is the limit. Well, it's not, because I tested my application with 2,097,152 (2MB, roughly 4 times 500,000) and it worked.
3. Some other sources state that 4MB is the limit... So, what is the limit? Is there a limit? Does it have something to do with the limit on the response from the server?
4. Finally, I'd like to get a strong suggestion on the length of the JSON string being received from the server. Not the number to which maxJsonLength should be set, but the actual length of the JSON string: a sense of "what to strive for".
Thank you in advance.
There is no hard and fast rule here. Your JSON length is going to depend on your application and what information you are returning to the client.
If you really want a "rule of thumb", it should be as SMALL as possible while still communicating the data that you need.
For max values, the true limitation is again most likely going to depend on browser requirements, but I personally would never go over 2MB for a JSON message, simply due to what it would take to send that down.
I understand that the total limit is determined by the lesser of the maxJsonLength that you have mentioned and the HttpRuntimeSection.MaxRequestLength. I am currently testing this and I will get back to you.
Of course, the big issue here is that it is seldom a good idea to return such large amounts of data. Whenever I have a response that starts to exceed about 100KB, I take another look at my overall design and find ways to serve out smaller chunks as they are needed. Even this 100KB is high for most pure-data scenarios, by which I mean textual data, not images or scripts.
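For instance, one way to serve out smaller chunks is offset-based paging; a rough sketch (assuming Flask, with an invented ITEMS store and page size):

from flask import Flask, jsonify, request

app = Flask(__name__)

ITEMS = [{"id": i, "value": f"row {i}"} for i in range(10_000)]  # invented data
PAGE_SIZE = 200  # tune so each JSON payload stays well under your cap

@app.get("/items")
def items():
    offset = request.args.get("offset", default=0, type=int)
    page = ITEMS[offset:offset + PAGE_SIZE]
    next_offset = offset + PAGE_SIZE if offset + PAGE_SIZE < len(ITEMS) else None
    # the client keeps following next_offset until it comes back null
    return jsonify(items=page, next_offset=next_offset)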
In the Seam Reference Guide, one can find this paragraph:
We can set a sensible default for the concurrent request timeout (in ms) in components.xml:
<core:manager concurrent-request-timeout="500" />
However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction Seam places on conversation access.
In our application we have a combination of page scoped ajax requests (triggered by various user actions), some global scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages.
Therefore, we get the dreaded "concurrent access to conversation" exception way too often, even without any significant load on the site.
After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.
And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and take care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog post) is really not a viable option.
Any other suggestions?
TIA,
Andrei
We use 60000 or 120000 (1-2 minutes). The concurrent-request-timeout is designed to avoid deadlocks, and historically we have had far more problems with timeouts than with deadlocks. A better approach is to use a client-side queue (<a4j:ajaxQueue> if using RichFaces) to serialize and remove duplicate requests as much as possible, then set the timeout high enough to avoid any remaining problems.
There are many serious issues resulting from Seam's concurrent request timeouts:

1. The last request is the one that gets the ConcurrentRequestTimeoutException. If the user double-clicks or reloads the page, only the last request matters; why should he get an error?

2. Usually the ConcurrentRequestTimeoutException is suppressed, and only secondary NullPointerExceptions and @In injection failures are shown, making debugging difficult.

3. Seam 2.2.1 has a severe problem where transactions, ThreadLocals, and locks may leak after a timeout occurs, especially when used with <spring:spring-transaction/>. Look at SeamPhaseListener.afterRestoreView: there's no finally block to clean up after restoreConversation fails!

In my opinion there are many poor aspects to this design, so it's best to use a much higher timeout and try to avoid the issues.
This is what we have and it works fine for us:
<core:manager concurrent-request-timeout="5000"
conversation-timeout="120000" conversation-id-parameter="cid"
parent-conversation-id-parameter="pid" />
We also use a much higher value for the concurrent-request-timeout.
At least for duplicate events you can use settings on the a4j components to filter and delay them: eventsQueue, requestDelay, and ignoreDupResponses="true".
(For the last point, see http://docs.jboss.org/seam/2.0.1.GA/reference/en/html/conversations.html)
Can you analyse which types of request are taking a long time? Is there a particular type where you could reduce the request time by doing the "work" asynchronously and getting the update back in your poll?
In my opinion, ajax requests should always complete fairly quickly; then you can calculate a max concurrent request time as (request time * max number of requests likely to be initiated).
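For example, if a typical ajax request completes in, say, 300 ms and up to five requests might be initiated together, that rule suggests a concurrent request timeout of at least 300 ms * 5 = 1500 ms (figures invented for illustration).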