I'm sending push notifications to Mozilla's Push Service:
https://updates.push.services.mozilla.com/wpush/v1/...
This has been working well for a long time, but for the past two weeks I have been getting a 413 - Request Entity Too Large for one (and only one) of the consumers.
I searched the web for that error message, but all I found was the 4KB payload limit of most push services. The payload I am sending is much smaller:
{
"Titel": "New calendar entry from subdomain.domain.com",
"Text": "A new entry has been made by firstname lastname in the calendar your-calendar-name on 2021/02/03.",
"Icon": "https:\/\/subdomain.domain.com\/version\/webapp\/icon192.png",
"URL": "https:\/\/subdomain.domain.com\/calendar\/event\/15578"
}
So my question is: what can cause this Request Entity Too Large error when I'm sending a payload that's under 4KB?
I came across something very much like this problem today. The cause was Firefox for Android, which behaves differently from Firefox for desktop.
URLs for the desktop version look like this: https://updates.push.services.mozilla.com/wpush/v2/[something]
The Android version's URLs look like this: https://updates.push.services.mozilla.com/wpush/v1/[something]
Note v2 as opposed to v1.
The requests to the v1 endpoint were all coming back with 413 Request Entity Too Large.
The difference is that the desktop version accepts the standard 4078 bytes, whereas the Android version seems to have a lower limit (possibly 3052 bytes).
I use minishlink's PHP library for sending push notifications, and I found some information about this here: https://github.com/web-push-libs/web-push-php#payload-length-security-and-performance
This document says that the default is compatible with Firefox for Android (3052 bytes), but in practice I could only get it to work when I added the line
$webPush->setAutomaticPadding(false);
to my code. So if you happen to be using a library that pads the payload for security reasons, that might be your problem.
For more discussion, see https://github.com/web-push-libs/web-push-php/issues/108, which suggests that 2847 bytes, not 3052, is the actual limit in practice for payload to Firefox for Android.
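To make that concrete, here is a minimal sketch of sending with padding disabled, assuming web-push-php v6 or later; the VAPID keys, endpoint, and payload values are placeholders:

<?php
// Sketch only: send one notification with automatic padding disabled so the
// encrypted payload stays under the Firefox-for-Android limit.
require __DIR__ . '/vendor/autoload.php';

use Minishlink\WebPush\WebPush;
use Minishlink\WebPush\Subscription;

$auth = [
    'VAPID' => [
        'subject'    => 'mailto:admin@subdomain.domain.com', // placeholder contact
        'publicKey'  => '<VAPID public key>',                // placeholder
        'privateKey' => '<VAPID private key>',               // placeholder
    ],
];

$webPush = new WebPush($auth);
// Disable automatic padding (or pass a small integer to pad only up to that
// many bytes) so the padded, encrypted message fits the v1 endpoint's limit.
$webPush->setAutomaticPadding(false);

$subscription = Subscription::create([
    'endpoint'        => 'https://updates.push.services.mozilla.com/wpush/v1/...', // from the browser
    'publicKey'       => '<p256dh key from the subscription>',
    'authToken'       => '<auth secret from the subscription>',
    'contentEncoding' => 'aes128gcm',
]);

$payload = json_encode([
    'Titel' => 'New calendar entry from subdomain.domain.com',
    'Text'  => 'A new entry has been made ...',
    'Icon'  => 'https://subdomain.domain.com/version/webapp/icon192.png',
    'URL'   => 'https://subdomain.domain.com/calendar/event/15578',
]);

$report = $webPush->sendOneNotification($subscription, $payload);
echo $report->isSuccess() ? 'sent' : 'failed: ' . $report->getReason();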
I'm having some difficulties with an ASP.NET Core Web API application hosted on IIS 8.5.
IIS 8.5 returns a 400 status code for one specific POST request.
The failing request is issued by a web application hosted on the same domain on a different port. The API is configured to handle CORS, and the preflight for the failing request completes successfully.
I noticed a weird thing:
The API is deployed with Swagger UI included, so I tried to reproduce the error through Swagger UI. In that case the request succeeds.
The body and the URL of both requests are absolutely the same, and there are no noticeable differences in the headers except, of course, the request origin.
It looks like the request is not processed by the API at all (I should see something in our log files in that case), so I'm pretty sure the error occurs somewhere in IIS itself.
I've already investigated the httperr.log file. It contains the following line at the time of the failed request:
2018-12-05 15:38:36 192.168.100.132 62121 192.168.100.173 1142 HTTP/1.1 POST /api/some/request/path 400 13 BadRequest myServicePool
I was hoping this file would contain more details about the cause of the error.
I was wondering if the "13" before "BadRequest" has any special meaning?
Does anyone have an idea, based on the information given, why this error occurs? I don't really expect so, but I would be more than happy if anybody could give me a hint on where to search for more details about the cause of the error.
Let me know if you need more details.
It would be better if we could see sample code showing how you send the request.
However, with the given facts I assume the problem is in the content of the request body. Even though the Swagger request and the request you are sending look exactly the same, they must differ in some respect.
Are you using a JSON converter? If you are serializing a .NET model to a JSON string and attaching it to the request, please make sure that you are formatting it with camel case.
By default the converter might just serialize the .NET model as it is, with Pascal-case property names.
EXAMPLE
I'll elaborate on this using the Newtonsoft JSON library.
.NET model serialized without specifying the format
var businessLeadJson = JsonConvert.SerializeObject(businessLead);
Converted result - {"Company":"sample","ContactName":"contact 1"}
.NET model serialized by specifying the format
var businessLeadJson = JsonConvert.SerializeObject(businessLead,
    new JsonSerializerSettings { ContractResolver = new CamelCasePropertyNamesContractResolver() });
Converted result - {"company":"sample","contactName":"contact 1"}
Please notice the case of the property names in the JSON strings: in the first result the first letter of each property is capitalized.
Therefore I recommend serializing the objects you attach as the payload (request body) with explicit formatting settings, because the receiving API may expect the property names in a specific case.
Please specify camel-case formatting when you serialize the object for your request body.
Good Luck..!
I've just managed to reproduce this error by accident.
The problem is that the application sends an empty Authorization header if the user hasn't logged in yet.
It seems that causes a Bad Request on some configurations/IIS versions (or whatever the difference between the systems is), while on others it's no problem.
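For anyone hitting the same thing, here is a minimal sketch of the client-side workaround, assuming a cURL-based PHP client; the function name and variables are illustrative:

<?php
// Sketch: only attach the Authorization header when a token actually exists,
// so an empty "Authorization:" header is never sent.
function postJson(string $url, string $body, ?string $token): array
{
    $headers = ['Content-Type: application/json'];
    if ($token !== null && $token !== '') {
        $headers[] = 'Authorization: Bearer ' . $token; // omit entirely when not logged in
    }

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $body,
        CURLOPT_HTTPHEADER     => $headers,
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = curl_exec($ch);
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return [$status, $response];
}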
We're using the LinkedIn API for a very simple use case:
authenticate (either through SDK or web view)
use token to fetch profile
Today we noticed a huge degradation of the service. Response times for simple profile calls are in the range of 500 ms to 12000 ms, and sometimes the API returns a 500.
The slow responses can easily be reproduced using the REST console from the developer guide:
https://apigee.com/console/linkedin
with more than half of the requests taking more than 10sec to respond.
We also noticed that sometimes requests to LinkedIn return a 500:
<-- 500 https://api.linkedin.com/v1/people/~:(id,first-name,last-name,email-address,formatted-name,headline,location,industry,num-connections,num-connections-capped,summary,specialties,positions,phone-numbers,public-profile-url,picture-url,picture-urls::(original))?format=json (17808ms)
{
"errorCode": 0,
"message": "Internal API server error",
"requestId": "3FXP4W2HD2",
"status": 500,
"timestamp": 1519643223202
}
<-- END HTTP (138-byte body)
This looks like a timeout somewhere on the server side, hence the failure.
Given that the API sometimes responds quickly, it feels like an outage of some of the servers, maybe? (I might be wrong.)
It would be great to know:
whether there is a status page for the LinkedIn API that can tell us about problems and how/when they are being addressed.
whether we can treat the V1 API as reliable at all. There was a similar (but easier to work around) problem with fetching profiles just a couple of weeks ago.
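In the meantime, a simple retry with back-off can at least smooth over the intermittent 500s. A minimal sketch, assuming a cURL-based PHP client; the helper name and the shortened field list are illustrative:

<?php
// Sketch: retry transient 5xx responses with exponential back-off.
function getWithRetry(string $url, string $token, int $maxAttempts = 3): ?string
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $token],
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 15, // fail fast instead of hanging for 12 s+
        ]);
        $body   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status >= 200 && $status < 300) {
            return $body;
        }
        if ($status > 0 && $status < 500) {
            break; // a 4xx will not be fixed by retrying
        }
        if ($attempt < $maxAttempts) {
            sleep(2 ** ($attempt - 1)); // 1 s, 2 s, ... between attempts
        }
    }
    return null;
}

$profile = getWithRetry(
    'https://api.linkedin.com/v1/people/~:(id,first-name,last-name)?format=json',
    $accessToken // illustrative: your OAuth token
);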
I'm a student and I'm taking my first networking class. I'm working on an assignment designed to get me used to using Wireshark and understanding packet transfers. Part of the assignment is to collect some data about a certain GET request, but my Wireshark isn't showing anything related to GET or POST requests.
I've discussed this with my instructor and he can't figure it out, either. I've tried uninstalling/reinstalling Wireshark, but haven't gotten anything different.
Here's what I'm getting when I should be getting GET data:
26030 1157.859131000 128.119.245.12 10.0.0.7 HTTP 564 HTTP/1.1 404 Not Found (text/html)
This is the first packet I get after connecting to the server (this comes from right-clicking and choosing "Copy"). From what I've gathered from the assignment instructions and the instructor, there should be a GET request before this. Anyone have any ideas?
You can use a browser plugin like Firebug to examine the actual request and response headers being exchanged. Sometimes, due to page caching, the actual document is not re-fetched: only headers such as If-Modified-Since are exchanged, or the browser's cached copy has not yet expired.
I'm currently working on a project that needs to request a URL multiple times. Having studied the traffic in an HTTP proxy (Charles), it seems that AIR caches the first response and then returns the same response for each subsequent request.
Does anybody know how to tell whether a response has been cached, other than setting useCache on the URLRequest? That flag doesn't say whether a given response came from the cache. The digest isn't set on the URLRequest either, although the documentation mentions that this is for SWZ files only. So how does it know whether the content is current? Are the response headers used to work out how long to hold the cache, e.g.
Cache-Control: max-age=900
Also, does anyone know how to flush/purge the cache, or are we at the whim of the GC? In that case, how does it decide whether to leave an item in the cache or not?
This makes sense to me, but I would still like to know how to regulate this cache.
Furthermore: I've tested a setup where ten parallel URLLoaders are created, all opening the same URL, to see what happens. It seems each request goes out until a successful response arrives, and all subsequent calls are then served from the cache. Calls that were already in flight before the successful response completes do not use the cache and return with correct data.
Additional: the AIR runtime doesn't even send an If-Modified-Since header, so the cache isn't even honoring the HTTP protocol. It seems Adobe has implemented its own kind of cache that doesn't use the HTTP/1.1 header field definitions. Perfect.
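For reference, this is the conditional GET a compliant HTTP cache would perform to re-validate a stored copy. A minimal sketch, assuming a cURL-based PHP client; the URL and timestamp are placeholders:

<?php
// Sketch of an HTTP/1.1 conditional GET: re-validate a cached copy instead
// of downloading it again.
$ch = curl_init('https://example.com/resource'); // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        // Timestamp of the copy we already hold (placeholder date).
        'If-Modified-Since: Wed, 03 Feb 2021 10:00:00 GMT',
    ],
]);
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 304) {
    // Not Modified: the cached copy is still valid, reuse it.
} elseif ($status === 200) {
    // Full response: replace the cached copy with $body.
}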
Thanks for any help.
Simon
From the documentation of the URLRequest class, it seems that AIR uses the operating system's HTTP cache. On Windows 7, it appears to use IE's cache.
You can use an HTTP monitoring tool like Fiddler to verify this.
The first request returns 200, and subsequent requests return 304. After clearing the IE cache and running the application again, you can see that the first request results in an HTTP 200 status once more.
I am implementing a RESTful web service that accesses a database. Entities in the database are versioned to detect multiple updates. For instance, if the current value is {"name":"Bill", "comment":"tinker", "version":3} and one user PUTs {"name":"Bill", "comment":"tailor", "version":3}, the request will succeed (200 OK) and the new value will be {"name":"Bill", "comment":"tailor", "version":4}. If a second user then PUTs {"name":"Bill", "comment":"sailor", "version":3}, that request will fail (409 Conflict) because the version number does not match.
There are existing non-RESTful interfaces, so the design of the database cannot be changed. The RESTful interface calls an existing interface that handles the details of checking the version.
A rule of thumb in RESTful web services is to follow the details of HTTP whenever possible. Would it be better in this case to use a conditional header in the request and return 412 Precondition Failed if the version does not match? The appropriate header appears to be If-Match. This header takes an ETag (Entity Tag) which could be a hash of the representation of the current state of the resource.
If I did this, the ETags would be for appearances' sake, because the version would still be the real thing I'm testing for.
Is there any reason I should do this, other than "making it more RESTful", whatever that is supposed to mean?
The appropriate thing to do is always to follow the HTTP spec if you're using HTTP, and the reason is simply to allow people who understand the spec to function correctly.
412 should only be used if a precondition (e.g. If-Match) caused the version matching to fail, whereas 409 should be used if the entity would cause a conflict (the HTTP spec itself alludes to this behaviour in the definition of 409).
Therefore, a client that doesn't send ETags won't be expecting a 412. Conversely, a client that does send ETags won't understand that it's ETags that are causing a 409.
I would stick with one way. You say that "the database schema can't change", but that doesn't stop you (right in the HTTP server layer) from extracting the version from the database representation and putting it in the ETag, and then, on the way in, taking the If-Match header and putting it back into the version field.
But doing it completely in the entity body itself isn't forbidden. It just requires you to explain the concept and how it works, whereas with the ETag solution you can just point people to the HTTP spec.
Edit: And the ETag doesn't have to be a hash of the current resource; a version number is quite acceptable. ETag: "3" is a perfectly valid ETag.
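A minimal sketch of that mapping, assuming a plain PHP front layer; saveEntity() stands in for the existing interface that does the real version check:

<?php
// Sketch of a PUT handler where the stored version number doubles as the ETag.
function handlePut(array $current, array $incoming): void
{
    $ifMatch = $_SERVER['HTTP_IF_MATCH'] ?? null;

    if ($ifMatch !== null) {
        // Client used the HTTP mechanism: compare the ETag, answer 412 on mismatch.
        if (trim($ifMatch, '"') !== (string) $current['version']) {
            http_response_code(412); // Precondition Failed
            return;
        }
    } elseif ($incoming['version'] !== $current['version']) {
        // Client relied on the version field in the body: answer 409.
        http_response_code(409); // Conflict
        return;
    }

    saveEntity($incoming); // hypothetical call into the existing interface
    header('ETag: "' . ($current['version'] + 1) . '"');
    http_response_code(200);
}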