HERE API rate limit header explanation - here-api

I am using the HERE Route Matching API.
At some point, due to the large number of requests, I receive a 429 error with the following headers.
X-Ratelimit-Limit:[250, 250;w=10]
X-Ratelimit-Reset:[6]
Retry-After:[6]
These are the only rate limiting related headers I receive.
I would like an explanation of the X-Ratelimit-Limit:[250, 250;w=10] header.
What does the 250 and w=10 mean?

The first number is the maximum number of requests allowed for the given API in the current time window.
The second part, after the comma, is the quota policy: the limit followed by the window length in seconds.
For example, a policy of 100 quota-units per minute would be written:
100;w=60
For your header, 250;w=10 means 250 requests are allowed every 10 seconds.
More details at: the IETF RateLimit header fields draft
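A minimal sketch of parsing such a header value, assuming the "limit, limit;w=seconds" shape shown above (the helper name is my own, not part of any SDK):

```python
import re

def parse_ratelimit_limit(value):
    # Parse a value like "250, 250;w=10": the first number is the
    # request limit, the optional "limit;w=seconds" part is the
    # quota policy with its window length in seconds.
    parts = [p.strip() for p in value.split(",")]
    limit = int(parts[0])
    window = None
    for part in parts[1:]:
        match = re.match(r"(\d+);w=(\d+)$", part)
        if match:
            window = int(match.group(2))
    return limit, window

parse_ratelimit_limit("250, 250;w=10")  # → (250, 10)
```

With that parsed, your client knows it may send at most 250 requests per 10-second window, and the Retry-After value tells it how long to pause after a 429.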


How to handle throttling when using the Azure Translator Text API?

When I send too many request to the Azure Translator Text API I sometimes receive 429 Responses from the API without indication how to properly throttle the request count. I have found some documentation about throttling but it doesn't seem to apply to this specific API: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits
Does anybody know if there is a similar way to get the remaining request count or the time to wait before another request should be made? Or do I have to implement my own logic to handle throttling?
Azure Translator Text API is a bit specific because the announced limit is not on the number of requests but on the number of characters.
As mentioned in the documentation here, the limit depends on the type of key:
Tier / Character limit
F0: 2 million characters per hour
S1: 40 million characters per hour
S2: 40 million characters per hour
S3: 120 million characters per hour
S4: 200 million characters per hour
And I guess that there is also a (more technical) request limit, not clearly indicated in the documentation.
To be clear, here are the limits of Microsoft Translator for the free tier (F0):
2,000,000 characters per hour/month
33,300 characters per minute
10,000 characters per second/request
The limit resets 60 seconds after you are blocked.
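Since the API signals throttling only via 429 responses, one reasonable approach is a retry loop that honours a Retry-After header when present and falls back to exponential backoff otherwise. A sketch, assuming a response object with `status_code` and `headers` attributes (the shape the `requests` library uses):

```python
import time

def call_with_backoff(send_request, max_retries=5):
    # `send_request` is any zero-argument callable returning a
    # response with `status_code` and `headers` attributes.
    delay = 1.0
    for _ in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint; fall back to
        # exponential backoff when the header is absent.
        retry_after = response.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after is not None else delay)
        delay *= 2
    raise RuntimeError("still rate limited after %d attempts" % max_retries)
```

For this particular API you would additionally count characters sent per minute on the client side, since that is what the quota is measured in.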

Alexa Skill Kit 24 Kilobyte Response Limit Change

The Amazon Alexa developer docs state that there is a 24-kilobyte limit on the size of the response JSON payload. I previously observed this limit being enforced, but recently it seems the limit has been removed.
Does anyone know if this limit has officially been removed, and if so, is there a new, higher limit on the response size?
I'm quoting straight from the documentation:
Note the following size limitations for the response:
The outputSpeech response cannot exceed 8000 characters.
All of the text included in a card cannot exceed 8000 characters. This includes the title, content, text, and image URLs.
An image URL (smallImageUrl or largeImageUrl) cannot exceed 2000 characters.
The token included in an audioItem.stream for the AudioPlayer.Play directive cannot exceed 1024 characters.
The url included in an audioItem.stream for the AudioPlayer.Play directive cannot exceed 8000 characters.
The total size of your response cannot exceed 24 kilobytes.
If your response exceeds these limits, the Alexa service returns an error.
So, the limit you are asking about is still valid.
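If you want to guard against the 24 KB cap before returning a response, you can measure the serialized payload yourself. A sketch (the helper name is mine; the exact bytes Alexa counts may differ slightly, so leave headroom rather than aiming for 24576 exactly):

```python
import json

ALEXA_RESPONSE_LIMIT = 24 * 1024  # the documented 24-kilobyte cap

def fits_alexa_limit(response_dict):
    # Serialize compactly, as a skill backend typically would,
    # and measure the UTF-8 byte size against the cap.
    payload = json.dumps(response_dict, separators=(",", ":")).encode("utf-8")
    return len(payload), len(payload) <= ALEXA_RESPONSE_LIMIT

size, ok = fits_alexa_limit({
    "version": "1.0",
    "response": {"outputSpeech": {"type": "PlainText", "text": "Hello"}},
})
```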

Http maximum GET/POST query parameters supported

I have read many questions about URL length limits in HTTP, but I am still unable to find an answer on how many parameters HTTP supports at most.
What is the maximum number of parameters supported in HTTP? By parameters I mean:
https://www.google.com/search?q=cookies&ie=utf-8&oe=utf-8
Here there are 3 parameters:
q, ie, and oe, together with their corresponding values.
The query string is under authority of RFC 3986, section 3.4 which does not specify any limit with the exception of the allowed characters. You will also struggle to find any limitation on the logical number of parameters, since there has never been a real specification on the format; what you find in there is rather a best-practice that has been highly influenced by what CGI is doing. So the number of parameters is very much bound by what the client or server is willing to transfer/accept (the lower bound wins, obviously). Per this answer, you can find a rough estimate here.
There is no limit on the number of parameters; it's all about data size, i.e. how many KB you are sending in your GET request. However, this value is configurable on the web server side (Apache, Tomcat, etc.).
The default limit for the length of the request line is 8190 bytes in Apache, and this value can be changed to increase or decrease it.
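Since the practical bound is the request-line length rather than a parameter count, a client can simply check the built URL before sending. A sketch against Apache's default of 8190 bytes (other servers and proxies use different defaults):

```python
from urllib.parse import urlencode

APACHE_REQUEST_LINE_LIMIT = 8190  # Apache's default LimitRequestLine, in bytes

def build_get_url(base, params):
    # The request line is "GET <url> HTTP/1.1", so the URL itself
    # must be a little shorter than the server's request-line limit.
    url = base + "?" + urlencode(params)
    request_line = "GET %s HTTP/1.1" % url
    return url, len(request_line) <= APACHE_REQUEST_LINE_LIMIT

url, fits = build_get_url(
    "https://www.google.com/search",
    {"q": "cookies", "ie": "utf-8", "oe": "utf-8"},
)
```

Thousands of short parameters would pass this check, while a handful of very long values could fail it, which is exactly why the limit cannot be stated as a parameter count.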

Response to Partial Content, if size is unknown. Range request like "bytes=100-"

What does the Content-Range header look like if I request a range and the size is unknown?
For example, my request is "bytes=100-200" and the stream will end at 150, but I do not know that before I start streaming. What should I send as the Content-Range header?
bytes 100-/*
bytes 100-200/*
bytes 100-*/*
Or it is not a legal situation at all?
Same question if the request is open ended: "bytes=100-"
If you request a range that is satisfiable, the server should respond with a 206 (partial content) response. See RFC7233, sec. 4.1.
If the bytelength of the requested resource is smaller than the offset of the range interval, or the closing offset is beyond the resource length, the server should respond with a 416 (range not satisfiable). See section 4.4.
To skip the first 100 bytes of the content, you are indeed right in that the request should contain a Range: bytes=100- header. See sec. 2.1 and sec. 3.1.
As far as the situation goes for a resource which has unknown length and is being read in a way that yields content chunks of unpredictable size: This is undefined behaviour not sanctioned by any RFC. The Content-Range header is specified in a way that the current range or the total content size is unknown, but not both. You cannot resort to the HTTP envelope as a means of specifying the range length as a server must provide a Content-Range header when responding with a 206 code (cf. sec 4.1).
The correct way to handle the situation would be:
Validate the range request
Attempt to read a sufficient number of bytes from the requested resource
If a sufficient number of bytes could be retrieved, create the HTTP envelope, specify the range, and attach the body, cutting it off if needed
In any other case: respond with a 416
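The steps above can be sketched for the simple case where the resource can be fully buffered first, so its length is known by the time the response is built (function name and return shape are mine):

```python
def range_response(range_start, range_end, data):
    # Returns (status, content_range, body) for a buffered resource
    # of known length. `range_end` is None for an open-ended request
    # like "bytes=100-".
    total = len(data)
    if range_start >= total:
        # First-byte position beyond the resource: not satisfiable.
        return 416, "bytes */%d" % total, b""
    # Clamp the last-byte position to the actual resource length.
    last = total - 1 if range_end is None else min(range_end, total - 1)
    body = data[range_start:last + 1]
    return 206, "bytes %d-%d/%d" % (range_start, last, total), body
```

For the question's example, a request for bytes=100-200 against a 150-byte resource would yield "Content-Range: bytes 100-149/150" with a 50-byte body, and a request starting at byte 200 would get a 416 with "Content-Range: bytes */150".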

Elevation service UNKNOWN_ERROR

I'm having difficulty with the Google maps V3 JavaScript elevation service.
According a google groups posting ( https://groups.google.com/forum/#!msg/google-maps-js-api-v3/Z6uh9HwZD_k/G1ur1SJN7fkJ ), it appears that if you use getElevationAlongPath() it compresses and sends the entire path to the Google server as an Ajax GET request and subsamples it on their server. This means that if you have a large number of path segments the encoded URL exceeds the maximum URL length and the request fails with UNKNOWN_ERROR.
Can anyone confirm that this is a URL length issue?
I've tried doing my own subsampling along the path and sending just the points I want elevation data for as a getElevationForLocations() request. This does seem to be an improvement, but I'm still getting some UNKNOWN_ERROR responses. These occur unpredictably. Sometimes a request with 400 points returns successfully. Other requests will fail with only 300 points passed. I'm guessing that this still a problem with URL length (presuming getElevationForLocations() also sends URL-encoded data to Google).
The documentation says that "you may pass any number of multiple coordinates within an array, as long as you don't exceed the service quotas." This doesn't seem to be the case.
Does anyone have any suggestions for a reliable way to get a large number of elevation data points (500?) from a long path?
Thanks,
Colin
After a bit more digging, this seems to be the situation.
The JavaScript API for elevation uses the HTTP elevation service behind the scenes. The HTTP elevation service docs do say that requests are limited to 2048 characters. However, if you're using the HTTP service directly, you build your own URLs. This means you can check the length before sending. If you use the JavaScript API, the URL is built for you, but the API code doesn't check the URL length before sending.
The call end-point URL and the necessary parameters take up 78 characters leaving 1970 for the encoded points.
This is where it gets messy. The number of characters in an encoded point varies with the size and precision of the lat and lng values. Generally, somewhere between 8 and 12 characters per point. An added complication is that some of the characters used in the path encoding may need URL-encoding - further increasing the number of characters needed per point by an unknown, but potentially significant amount (2 extra characters for each path character in need of URL encoding).
All of these complications mean that it's theoretically possible for a call to result in too long a URL with just 55 points, although that is very, very unlikely. A safe limit is probably 150 points (but this may still fail occasionally). 200 should work most of the time. 250 should be about the maximum.
In reality, from a small number of tests:
- 200 worked every time
- 300 usually works
- 400 sometimes works
The discrepancy between the calculation and the tests suggests that the JavaScript API may be doing some further form of compression, or that I've got something wrong in my calculations.
Your suspicions are correct, this is a URL length issue. If you have Chrome's Developer Tools open when you submit the request you'll see an HTTP 414 (Request-URI Too Large) error. The URL is around 3000 characters which is about 1000 too many (2048 is a common max url length).
Internally, the Google Maps API converts all those points to what looks like an encoded polyline, which helps compress the data down, but it's clearly not enough for this really long path. It might be worth splitting the request up into multiple segments when you know you're going to be including more than N points (I'd experiment with N to see what works).
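Splitting the point list into fixed-size batches is straightforward; a sketch, using 200 points per request since that was the batch size that worked every time in the tests above (tune it for your own coordinates):

```python
def chunk_points(points, batch_size=200):
    # Slice the full point list into consecutive batches so each
    # elevation request's URL stays under the length limit.
    return [points[i:i + batch_size]
            for i in range(0, len(points), batch_size)]
```

You would then issue one getElevationForLocations() call per batch and concatenate the results in order.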
