SIM808: Cancel an HTTP request by AT command

When I send an HTTP request (AT+HTTPACTION=0) with my SIM808, it sometimes does not respond with +HTTPACTION: 0,200,2. My goal is to check whether the SIM808 is still waiting for the response or is ready to send another request. Another solution would be to cancel and forget the request. I don't care if the data won't reach the server (I'm sending it every 15 seconds).
I don't want to check for error code 604 (STACK BUSY), since that could introduce errors into my code.
Currently, if +HTTPACTION: 0,200,2 doesn't arrive, I wait 200 seconds (long enough for the TCP/IP timeout) and then send another request, roughly as sketched below.
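For reference, here is a rough sketch of that timeout-and-retry workaround in Python (assuming pyserial; the serial port, baud rate and the AT+HTTPTERM/AT+HTTPINIT reset at the end are my own assumptions for illustration):

```python
# Sketch of the current workaround: wait up to 200 s for the +HTTPACTION
# URC, otherwise give up and reset the HTTP service before retrying.
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port/baud are placeholders

def send_at(cmd):
    """Send an AT command and return whatever the module prints within ~1 s."""
    ser.write((cmd + "\r\n").encode())
    time.sleep(1)
    return ser.read(ser.in_waiting or 1).decode(errors="ignore")

def http_get_with_timeout(timeout_s=200):
    """Issue AT+HTTPACTION=0 and wait up to timeout_s for the +HTTPACTION URC."""
    send_at("AT+HTTPACTION=0")
    deadline = time.time() + timeout_s
    buf = ""
    while time.time() < deadline:
        buf += ser.read(ser.in_waiting or 1).decode(errors="ignore")
        if "+HTTPACTION:" in buf:
            return buf  # e.g. "+HTTPACTION: 0,200,2"
    # No URC arrived: abandon this request and reset the HTTP service so the
    # next AT+HTTPACTION starts from a clean state.
    send_at("AT+HTTPTERM")
    send_at("AT+HTTPINIT")
    return None
```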
Summary:
How to cancel an HTTP request,
or how to check whether the SIM808 is still waiting for the response.
Many thanks :-)

Related

Determine when an HTTP(S) POST has reached the receiver without waiting for the full response

I want to invoke an HTTP POST with a request body and wait until it has reached the receiver, but NOT wait for any full response if the receiving server is slow to send the response.
Is this possible at all to do reliably? It's been years since I studied the internals of TCP/IP so I don't really remember the entire state machine here.
I guess that if I simply incur a timeout of, say, 1 second and then close the socket, there's no guarantee that the request has reached the remote server. Is there any signalling at all happening when the receiving server has received the entire request, but before it starts sending its response?
In practical terms I want to call a webhook URL without having to wait for a potentially slow server implementation of that webhook - I want to make the webhook request "fire and forget" and simply ignore the responses (even if they are intermediate errors in gateways etc. and the request actually didn't reach its final destination), but I'm hesitant to simply set a low timeout (and if so, how low would be "sufficient", etc.).
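A minimal sketch of the "fire and forget" approach described above, using the Python requests library; the URL, payload and timeout values are placeholders:

```python
import requests

def fire_and_forget(url, payload):
    try:
        # timeout=(connect, read): allow 3 s to connect, then wait at most
        # 1 s for a response before giving up on reading it.
        requests.post(url, json=payload, timeout=(3, 1))
    except requests.exceptions.ReadTimeout:
        # The request body was written to the socket, but (as noted above)
        # a read timeout alone does not prove the server received or
        # processed the entire request.
        pass
    except requests.exceptions.ConnectionError:
        # The connection itself failed; the request most likely never
        # reached the server.
        pass
```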

HTTP code for timeout when server continues processing in the background

I stumbled upon a case where a request to an endpoint might take more than 60 seconds (let's say that's the timeout value), in which case the server sends a response and continues processing the request in the background. There are also cases where the same request would be processed before it times out and a successful response would be sent from the server to the client.
What would be the best HTTP code to use in that first case? I read "HTTP server timeout. When should it be sent", which suggests 503 or 504, and "HTTP status code for 'Loading'", which mentions that the request can be deemed successful and return 200. But I'm not convinced by any of those suggestions more than the others yet.
No
The HTTP protocol doesn't work that way.
A server receives a request, processes it and sends a reply. The cycle ends there.
HTTP was never intended to send multi-stage replies with different states. You need a custom protocol built on top of HTTP if you want to do that.
Sending a timeout error as an indication of an unfinished response is an anti-pattern. If your server takes more time than usual to process a request, you should send a success response with an ID that can be used to poll the state of the initial request and get the results.
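A minimal sketch of that poll-by-ID pattern, using Flask; the endpoint names, the in-memory job store and the placeholder process_job() are illustrative assumptions:

```python
import threading
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {"state": ..., "result": ...}

def process_job(job_id):
    # Stand-in for the slow work; replace with the real processing.
    jobs[job_id]["result"] = "done"
    jobs[job_id]["state"] = "finished"

@app.route("/commands", methods=["POST"])
def submit_command():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"state": "processing", "result": None}
    threading.Thread(target=process_job, args=(job_id,), daemon=True).start()
    # Reply immediately with an ID the client can poll.
    return jsonify({"id": job_id}), 200

@app.route("/commands/<job_id>", methods=["GET"])
def poll_command(job_id):
    job = jobs.get(job_id)
    if job is None:
        return jsonify({"error": "unknown id"}), 404
    return jsonify(job), 200
```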
So to summarize from your question and comments: you have an HTTP API that takes a command and executes it, and sends a callback-reply through a webhook. If the execution takes longer than a minute, you have to send some form of reply that indicates the request is still being processed.
There are various problems with executing long-running work in an HTTP request handler. For starters, you tie up HTTP server resources (threads, sockets) while processing non-HTTP work, you can't restart the HTTP server without losing work, and so on.
So I would opt for a queuing mechanism that takes in the work, replies 200 OK or 201 Created immediately, and then schedules the work for processing on a background thread or even a different service. When finished, you execute the webhook callback.
Any error response to the initial call will leave the caller confused: they won't know whether their requested work will finish, unless you use an "exotic" status code that actually differs from real error conditions, and document that they can expect that.
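A sketch of that queue-based variant, again with Flask; the endpoint, the in-process queue and the webhook handling are illustrative assumptions rather than a production design:

```python
import queue
import threading

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
work_queue = queue.Queue()

def do_work(payload):
    return {"status": "done", "input": payload}  # placeholder for the real work

def worker():
    while True:
        payload, callback_url = work_queue.get()
        result = do_work(payload)  # long-running part happens off the request thread
        try:
            requests.post(callback_url, json=result, timeout=10)  # webhook callback
        except requests.RequestException:
            pass  # real code would log and retry
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

@app.route("/commands", methods=["POST"])
def accept_command():
    body = request.get_json()
    work_queue.put((body, body["callback_url"]))  # assumes the caller supplies a webhook URL
    # Reply right away; the webhook delivers the real outcome later.
    return jsonify({"accepted": True}), 200
```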
Charlie and CodeCaster suggested returning 200 or 201, and I took a look at the other 2xx codes and found 202 Accepted:
From https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way for the HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
I wonder if this would fit best.

Can an HTTP request fail halfway?

I am talking about only one case here.
The client sent a request to the server -> the server received it and returned a response -> unfortunately, the response was dropped.
I have only one question about this.
Is this case even possible? If it's possible, then what should the response code be, or will the client simply see it as a read timeout?
As I want to sync status between client/server and want 100% accuracy no matter how poor the network is, the answer to this question can greatly affect the client's 'retry on failure' strategy.
Any comment is appreciated.
Yes, the situation you have described is possible and occurs regularly. It is called "packet loss". Since the packet is lost, the response never reaches the client, so no response code could possibly be received. Web browsers will display this as "Error connecting to server" or similar.
HTTP requests and responses are generally carried inside TCP segments. If a segment carrying part of the HTTP response is not acknowledged within the expected time window, the sender's TCP stack retransmits it. A segment will only be retransmitted a certain number of times before a timeout error occurs and the connection is considered broken or dead. (The number of retransmission attempts before TCP gives up can be configured on both the client and server sides.)
Is this case even possible?
Yes. It's easy to see why if you picture a physical cable between the client and the server. If I send a request down the cable to the server, and then, before the server has a chance to respond, unplug the cable, the server will receive the request, but the client will never "hear" the response.
If it's possible, then what should the response code be, or will the client simply see it as a read timeout?
It will be a timeout. If we go back to our physical cable example, the client is sitting waiting for a response that will never come. Hopefully, it will eventually give up.
Exactly how this is wrapped up depends on the tool or library you're using, however: it might give you a specific error code for "timeout" or "network error"; it might wrap it up as some internal 5xx status code; it might raise an exception inside your code; etc.
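For example, with the Python requests library (a sketch; the URL and timeout are placeholders), a lost response surfaces as an exception rather than a status code:

```python
import requests

try:
    response = requests.get("https://example.com/api/sync", timeout=5)
    print(response.status_code)  # a real HTTP status sent by the server
except requests.exceptions.Timeout:
    # No response arrived in time: the server may or may not have received
    # and processed the request, so decide carefully whether a retry is safe.
    print("timed out - outcome unknown")
except requests.exceptions.ConnectionError:
    # The connection broke (reset, unreachable, ...); the outcome on the
    # server side is again unknown.
    print("connection error - outcome unknown")
```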

What is the Correct HTTP Status Code for a Cancelled Request

When a TCP connection gets closed by the client while it is making an HTTP request, I'd like to stop doing any work on the server and return an empty response. What HTTP status code should such a response return?
To be consistent, I would suggest 400 Bad Request. If your backend apps are capable of identifying when the client has disconnected, or if you reject or close the connection yourself, you could probably return Nginx's non-standard codes 499 or 444:
499 Client Closed Request
Used when the client has closed the request before the server could send a response.
444 No Response
Used to indicate that the server has returned no information to the client and closed the connection.
HTTP (1.0/1.1) doesn't have a means to cancel a request. All that a client can do if it no longer wants the response is to close the connection and hope that the server contains an optimization to stop working on a response that can no longer be delivered. Since the connection is now closed, no response or status code can actually be delivered to the client, so any code you "return" is only for your own satisfaction. I'd personally pick something in the 4xx range¹, since the "fault" - the reason you can no longer deliver a response - lies with the client.
HTTP/2 does allow an endpoint to send END_STREAM or RST_STREAM to indicate that it is no longer interested in one stream without tearing down the whole connection. However, it is then meant to just ignore any further HEADERS or DATA sent on that stream, so even though you may theoretically deliver a status code, the client is still going to completely ignore it.
¹ Probably 400 itself, since I can't identify a more specific error that seems entirely appropriate.
There are just a few plausible choices (aside from 500, of course):
202 Accepted
You haven't finished processing, and you never will.
This is appropriate only if, in your application domain, the original requestor "expects" that not all requests will be satisfied.
409 Conflict
…between making and cancelling the request.
This is only weakly justified: your situation does not involve one client making a request based on out of date information, since the cancellation had not yet occurred.
503 Service Unavailable
The service is in fact unavailable for this one request (because it was cancelled!).
The general argument of "report an error as an error" favors 409 or 503. So 503 it is by default.
There really is little to do. Quoting from RFC 7230, section 6.5:
A client, server, or proxy MAY close the transport connection at any time.
That happens at the TCP level, not the HTTP level. Just stop processing the connection. A status code will carry little meaning here, as the intent behind an incomplete/broken request is mere speculation. Besides, there will be no means to transport it to the client.
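A sketch of "just stop processing" with Python's standard http.server; the handler and response body are illustrative, and the exact exception raised when the client has already gone away can vary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"result"  # stand-in for the real work
        try:
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except (BrokenPipeError, ConnectionResetError):
            # The client went away: no status code can reach it any more,
            # so just abandon the response and move on.
            self.log_message("client disconnected before the response was sent")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```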

What's the fastest way to send the same HTTP request repeatedly?

I want to send the same HTTP request repeatedly until I get the right response. The server is slow: sending the request is quick and receiving the response is quick too, but waiting for the server to handle the request is slow. So sending a request and then just waiting for it to fail is not acceptable.
I'm thinking of the following workflow:
1) Send the request.
2) After sending the data, start a new request that sends the same request.
Repeat 1-2; the responses should be handled asynchronously, and when the right response is detected, stop sending requests.
How can I achieve this workflow, or any other workflow that solves my problem? Any language or tool that is fast would be acceptable, such as C/C++.
This will just cause the server to respond slower and slower; your first request will be the first to receive any response, and all the others will be wasted CPU time and bandwidth. If you did that to my servers, you'd get your IP banned automatically.
What you need to consider is:
why do you need the response this fast?
can you cache the response so that re-requesting it is no longer needed? (see the caching sketch after this list)
would a caching proxy between your client(s) and the server cover your needs? (also consider prefetching)
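A sketch of the caching idea from the second point above; the URL handling, cache lifetime and error handling are placeholders:

```python
import time
import requests

_cache = {}      # url -> (timestamp, response_text)
CACHE_TTL = 60   # seconds; an assumption, tune to how fresh the data must be

def fetch_cached(url):
    """Return a recent cached response if available, otherwise fetch once."""
    now = time.time()
    cached = _cache.get(url)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    _cache[url] = (now, resp.text)
    return resp.text
```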
