How to reset Token before expiry on bulk file upload - asynchronous

We have an application server for migrating files, where the number of files to be uploaded via a REST API call can range from 1,000 to 100,000.
We fetch a token with a 15-minute expiry and asynchronously upload the files to the target location.
10,000 async file upload requests are triggered within 1 to 5 minutes, but completing all the upload tasks takes more than an hour, so after 15 minutes the remaining requests fail with 401 Unauthorized.
We are not able to update the client while the requests are in progress using the following:
_client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", tk.access_token);
Even if we provide a fresh token for every request, all requests are still triggered within 5 minutes, so the token will have expired for the async requests still in progress after the 15-minute window.
Please suggest any solutions or ideas to try.

Related

How can a server handle HTTP requests in the order in which they were sent?

I'm building a simple CRUD application where there is a client, a server, and a database.
The client can send two types of requests to the server:
POST requests that update a row in the database.
GET requests that retrieve a row in the database.
One requirement of my application is that the requests have to be handled in the order in which they were sent from the client. For example, if the user makes a POST request to update row 1 in the database, then makes a GET request to retrieve row 1, the response to the GET request must reflect the changes made by the previous POST request. One issue is that there is no guarantee the server will receive the requests in the same order in which they were sent. In the example above, the server might receive the GET request first and the POST request later, in which case the response to the GET request cannot reflect the changes made by the POST request.
Another requirement of the application is that the client should not have to wait for the server's response before sending another request (this constraint is to allow for faster runtime). So in the example above, one possible solution is let the client wait for the server response to the POST request, and then it can send the GET request to retrieve the updated row. This solution is not desirable under this constraint.
My current solution is as follows:
Maintain a variable on the client that counts all the requests the user has sent so far, and append this variable to the request body when each request is sent. So if the user makes a POST request first, the POST request's body will contain e.g. count=1, and if they then make a GET request, the GET request's body will contain count=2. This variable can be maintained using localStorage on the client, which guarantees that the count accurately reflects the order in which the requests were made.
On the server side, I create a new thread every time a new request is received. Let's say that this request has count=n. This thread is locked until the request that contains count=n-1 has been completed (by another thread). This ensures that the server also completes the requests in the order maintained by the count variable, which was the order in which the request was made in the client.
However, the problem with my current solution is that once the user clears their browsing data, localStorage will also be cleared. This results in the count getting reset to 0, which makes subsequent requests be placed in a weird order.
Is there any solution to this problem?
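The server-side waiting scheme described above can be sketched with a condition variable: each handler blocks until the request carrying the previous count has completed. This is a minimal in-process sketch, assuming sequence numbers start at 1 and arrive without gaps (it does not address the localStorage-reset issue):

```python
import threading

class RequestSequencer:
    """Blocks each request until all requests with smaller
    sequence numbers have finished."""
    def __init__(self):
        self._cond = threading.Condition()
        self._completed = 0  # highest sequence number fully processed

    def run_in_order(self, count, handler):
        with self._cond:
            # Wait until the request with count-1 has completed.
            self._cond.wait_for(lambda: self._completed == count - 1)
        result = handler()  # run the actual work outside the lock
        with self._cond:
            self._completed = count
            self._cond.notify_all()  # wake the thread waiting on `count`
        return result
```

Note that a thread parked in `wait_for` consumes a worker per pending request; with many out-of-order arrivals a queue drained by one consumer scales better.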

How to increase the number of request limit per day for linkedin creative api endpoint?

The LinkedIn Creative API endpoint's request limit per day seems to have dropped from 1 million requests to 5,000 requests.
Once the 5,000-request limit is reached, the API returns: HTTP-error-code: 429, Error: Resource level throttle APPLICATION DAY limit for calls to this resource is reached.
Earlier Base URL API endpoint with 1 million requests - https://api.linkedin.com/v2/adCreativesV2. Reference
Latest Base URL API endpoint with 5000 requests - https://api.linkedin.com/rest/creatives. Reference
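The daily cap is an application-level quota enforced by LinkedIn, so it cannot be raised from the client side; within that quota, callers typically absorb 429s by backing off and retrying. A generic exponential-backoff sketch, not LinkedIn-specific (`request_fn`, the retry count, and the delays are all illustrative assumptions):

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a callable that may signal throttling (HTTP 429)
    with exponential backoff.

    request_fn is a hypothetical callable returning (status_code, body).
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```

A backoff loop cannot defeat a per-day quota, of course; once the DAY limit is exhausted, the only options are waiting for the window to reset or requesting a higher tier from LinkedIn.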

Limit Fastapi to process only 1 request at a time and after its completion process the next request

I'm trying to limit FastAPI to processing one request at a time; while that request is being executed, other requests should get a response saying the server is busy.
I don't fully understand your requirement, but you can either:
Use a library such as https://github.com/long2ice/fastapi-limiter or https://github.com/laurentS/slowapi to handle rate limits, or
Write a middleware that sets a flag in Redis or another in-memory store. Whenever a request comes in, check the flag: if it is True, reject the request; if it is False, set it to True and process the request, then set the flag back to False once the response has been sent.

How does Postman know that server is offline?

I'm using Postman to debug my WebAPI.
There are 2 cases where Postman does not get any answer from my API:
1. When I set a breakpoint for incoming requests
2. When my API is not running
In the 1st case, Postman waits (theoretically forever), but in the other it reports after a few seconds that something is wrong.
So my question is: how does that work? In the 1st case, the request reaches my server, but the server doesn't send any response until I stop debugging, which can take minutes. In the 2nd case, Postman also gets no response, but somehow it knows after a few seconds that it never will.
In the first case the TCP connection to the server succeeds, and Postman then waits for a reply until its configured timeout elapses.
In the second case nothing is listening on the port, so the connection attempt is refused immediately and Postman receives an error right away.
You can increase or decrease the maximum time Postman waits for a response using the XHR Timeout setting:
Set how long the app should wait for a response before saying that the server isn't responding.
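The difference between the two cases can be reproduced with a raw socket: connecting to a port nobody is listening on fails immediately with "connection refused", while a server that accepts the connection but never replies only fails once the client-side timeout elapses. A small sketch (`probe` is an illustrative helper):

```python
import socket

def probe(host, port, timeout=3.0):
    """Distinguish 'nothing listening' (refused, immediate)
    from 'listening but silent' (timeout, slow)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        return "refused"   # OS answers at once: no server on that port
    except socket.timeout:
        return "timeout"   # connection attempt got no answer in time
```

This is the same signal Postman sees: a refused connection surfaces in milliseconds, whereas a breakpointed server keeps the connection open and only the timeout ends the wait.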

Max-age and 304 Not Modified Processing

I've been looking at the standards - but was not entirely sure about the following:
If we have a variant (resource, image, page etc) that is served with a cache setting of max-age=259200 (3 days) and the server is also processing ETags and last modified dates - then what will happen when the max-age is reached - but the resource has not been modified?
What I'm hoping will happen is that after 3 days the client will request the resource again, and if it has not changed it will receive a 304 Not Modified response. If the Cache-Control header in that 304 response still contains max-age=259200, then I'm hoping the client will continue to use its local cached copy and not request the resource again for another 3 days.
What I'm afraid will happen is that once the max-age is reached, the client will no longer cache the resource, making a fresh request each time the resource is loaded, followed by a 304 Not Modified response if the resource has not been modified; i.e. we'd now be making an HTTP request for every use instead of using the local cache for another 3 days.
Thoughts?
It will cache for 3 more days. RFC 2616 10.3.5:
If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.
Details about age calculation.
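That update rule can be modeled in a few lines: a 304 refreshes the cached entry's header fields and freshness lifetime without re-transferring the body. A toy sketch (`CacheEntry` and its fields are illustrative, not a real HTTP cache implementation):

```python
import time

class CacheEntry:
    """Minimal model of how a 304 extends a cache entry's freshness."""
    def __init__(self, body, max_age):
        self.body = body
        self.max_age = max_age
        self.stored_at = time.time()

    def is_fresh(self):
        # Fresh while the entry's age is below max-age.
        return time.time() - self.stored_at < self.max_age

    def apply_304(self, new_max_age):
        # Per the quoted rule, a 304 updates the stored header fields,
        # resetting the freshness lifetime without re-downloading the body.
        self.max_age = new_max_age
        self.stored_at = time.time()
```

So after the 3 days expire, the client revalidates once, gets a 304 carrying max-age=259200, and then serves from cache for another 3 days.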
