DICOM: How does DIMSE Timeout work on big instances?

My understanding is that the DIMSE Timeout is the maximum time allowed between two consecutive DIMSE communications (request or response).
So, in an MWL operation, the MWL SCU establishes the connection and association and sends the MWL C-FIND request. The SCP should send the first response before the DIMSE Timeout expires. Similarly, each subsequent response should be sent by the SCP before the DIMSE Timeout expires.
Likewise, for a C-STORE operation, the C-STORE SCU sends the request and the SCP should respond before the DIMSE Timeout expires. This should happen for each instance sent on that association.
Is my understanding correct?
If yes, then how does this work for large instances that take longer than the configured DIMSE Timeout to transfer completely?
For example, the C-STORE SCU is pushing an instance (a large multi-frame object, let's say) that takes 1000 milliseconds to transfer completely. The DIMSE Timeout on both the SCP and the SCU is set to 500 milliseconds. What is the expected result here? Will the SCP or SCU encounter a DIMSE Timeout?

A DIMSE C-STORE message for a big object is split into multiple P-DATA PDUs (protocol data units, which are carried over the TCP connection). So actually, two timeouts apply:
The DIMSE timeout, which applies between two subsequent DIMSE messages.
The PDU timeout, which applies between two subsequent PDUs.
E.g. for the C-STORE service:
if the SCP does not send the C-STORE response in time after the request has been completely sent (i.e. all frames transferred) -> DIMSE timeout
if the SCU does not send the first PDU of the next C-STORE request in time after the SCP has sent the response to the previous object -> DIMSE timeout
if the time between two subsequent fragments (PDUs) sent by the SCU exceeds what the SCP is prepared to wait for -> PDU timeout
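For illustration, here is a minimal sketch of how these two timeout levels might be configured on the SCU side with pynetdicom. The toolkit choice, the SCP address/port, and the file name are my assumptions (nothing in the thread mentions them), and reading network_timeout as the lower-level inactivity timeout is my interpretation of that setting:

    # Hedged sketch: configure both timeout levels for a C-STORE SCU.
    # pynetdicom, the host/port and the file name are assumptions/placeholders.
    from pydicom import dcmread
    from pynetdicom import AE

    ds = dcmread("big_multiframe.dcm")  # hypothetical large multi-frame instance

    ae = AE(ae_title="STORESCU")
    ae.add_requested_context(ds.SOPClassUID, [ds.file_meta.TransferSyntaxUID])

    # DIMSE-level timeout: maximum wait between complete DIMSE messages,
    # e.g. waiting for the C-STORE response after the whole dataset was sent.
    ae.dimse_timeout = 30

    # Lower-level inactivity timeout on the connection; this is the setting
    # that roughly corresponds to the PDU timeout between fragments.
    ae.network_timeout = 10

    assoc = ae.associate("127.0.0.1", 11112)  # placeholder SCP address/port
    if assoc.is_established:
        status = assoc.send_c_store(ds)       # blocks until response or timeout
        print("C-STORE status:", status)
        assoc.release()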

I think it's because the action is still in progress; the system knows this and doesn't time out, since fragments of the DIMSE message are still being sent. The DIMSE timeout is for when the system doesn't get a DIMSE response within the configured time, for example if the peer machine disappears from the network.
Also, I believe most systems use non-blocking mode and so will not time out in the middle of a transfer. For example, you send 100 images and after 98 images the system doesn't receive a new image for 20 hours; unless a cancel signal is sent, the system will still be waiting for those last two images.

Related

Determine when an HTTP(S) POST has reached the receiver without waiting for the full response

I want to invoke an HTTP POST with a request body and wait until it has reached the receiver, but NOT wait for any full response if the receiving server is slow to send the response.
Is this possible to do reliably at all? It's been years since I studied the internals of TCP/IP, so I don't really remember the entire state machine here.
I guess that if I simply impose a timeout of, say, 1 second and then close the socket, there's no guarantee that the request has reached the remote server. Is there any signalling at all when the receiving server has received the entire request, but before it starts sending its response?
In practical terms, I want to call a webhook URL without having to wait for a potentially slow server implementation of that webhook. I want to make the webhook request "fire and forget" and simply ignore the responses (even if they are intermediate errors in gateways etc. and the request didn't actually reach its final destination), but I'm hesitant to simply set a low timeout (and if so, how low would be "sufficient"?).
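For what it's worth, here is a minimal Python sketch of the "fire and forget" behaviour described above, with a placeholder host and path: write the request, half-close the sending side, and never read the response. Note that a successful sendall() only means the bytes reached the local TCP send buffer; the ordinary socket API exposes no signal that the server has read the entire request.

    # Sketch of the "fire and forget" POST described above. Hostname, port and
    # path are placeholders; no response is ever read.
    import socket

    HOST, PORT, PATH = "webhook.example.com", 80, "/hook"
    body = b'{"event": "ping"}'

    request = (
        f"POST {PATH} HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode() + body

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(request)          # returns once the kernel accepted the bytes
        sock.shutdown(socket.SHUT_WR)  # half-close: we are done sending
        # deliberately no recv(): the response, slow or not, is ignored

One caveat: closing the socket before the response arrives can make the client's TCP stack send a reset once response data shows up, which some servers or intermediaries may log as an error; whether that matters depends on the webhook endpoint.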

What is the "retry" mechanism for nng req/rep. Are there no retries in pipe even if endpoint is tcp?

From the documentation, a req socket :
...is reliable, in that a requester will keep retrying until a reply is received.
Specifically:
The request is resent if no reply arrives, until a reply is received or the request times out.
Q1:
This just means that a rep socket must package and send a message back to the req socket to prevent retries, right?
However, using a lower-level reliable transport should give some guarantees about delivery even without req/rep, for example using a normal nng_pair, shouldn't it?
For example, if I specify endpoints as "tcp://x.x.x.x", then shouldn't TCP itself perform reliable transport of the packets, assuming the sockets are connected? And, since nng_socket handles reconnects ...
When the pipe is closed, the dialer attempts to re-establish the connection. Dialers will also periodically retry a connection automatically if an attempt to connect asynchronously fails.
Q2:
... then it seems TCP+pair should be enough to ensure eventual delivery of packets?
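To make the retry behaviour concrete, here is a small sketch using pynng (the Python bindings for nng; the binding and the endpoint address are my assumptions, not the poster's setup). The point is that only an explicit reply from the rep socket stops the req socket from resending:

    # Hedged sketch with pynng: the req socket keeps resending until the rep
    # side actually sends a reply. Endpoint address is a placeholder.
    import pynng

    addr = "tcp://127.0.0.1:13131"

    with pynng.Rep0(listen=addr) as rep, pynng.Req0(dial=addr) as req:
        req.send(b"do-work")

        msg = rep.recv()     # the rep side has received the request...
        # ...but until rep.send() runs, the req socket still considers the
        # request unanswered and will retransmit it after its resend interval.
        rep.send(b"done")    # this reply is what stops the retries

        print(req.recv())    # b'done'

As far as I understand it, with pair over TCP the transport only guarantees delivery while that particular connection stays up; a message in flight when the pipe drops is not retransmitted after the automatic reconnect, which is exactly the gap the req/rep retry is meant to cover.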

SIM808: Cancel a HTTP request by AT-command

When I send an HTTP request (AT+HTTPACTION=0) with my SIM808, it sometimes does not respond with +HTTPACTION: 0,200,2. My goal is to check whether the SIM808 is still waiting for the response or is ready to send another request. Another solution would be to cancel and forget the request. I don't care if the data won't reach the server (I'm sending it every 15 seconds).
I don't want to check for error code 604 (STACK BUSY), since that could introduce errors into my code.
Currently I wait 200 seconds (long enough for a TCP/IP timeout); if +HTTPACTION: 0,200,2 hasn't arrived by then, I send another request.
Summary:
How to cancel an HTTP request,
or how to check whether the SIM808 is still waiting for the response.
Many thanks :-)
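For reference, a minimal pyserial sketch of the wait-then-give-up approach described in the question. The serial port, baud rate and helper function are placeholders/assumptions, and whether AT+HTTPTERM actually aborts a pending action is firmware dependent:

    # Hedged sketch: send the HTTP action, wait up to 200 s for the URC, and
    # reset the HTTP service if it never arrives. Port/baud are placeholders.
    import time
    import serial

    def send_at(ser, cmd, pause=0.2):
        ser.write((cmd + "\r\n").encode())
        time.sleep(pause)
        return ser.read(ser.in_waiting or 1).decode(errors="ignore")

    ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

    send_at(ser, "AT+HTTPACTION=0")               # start the HTTP action

    deadline = time.time() + 200                  # the 200 s wait from the question
    buffer, got_urc = "", False
    while time.time() < deadline:
        buffer += ser.read(ser.in_waiting or 1).decode(errors="ignore")
        if "+HTTPACTION:" in buffer:              # URC arrived, module is free again
            got_urc = True
            break

    if not got_urc:
        # No URC within the limit: tear down and re-initialise the HTTP service
        # before the next attempt (whether this cancels the pending action is
        # firmware dependent).
        send_at(ser, "AT+HTTPTERM")
        send_at(ser, "AT+HTTPINIT")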

TCP latency analysis in Wireshark

I wanted to know which factors I need to check while analysing a latency issue in a Wireshark capture taken on the firewall.
I know about timestamps (time since the previous packet), but nothing beyond that.
If you are talking about the latency of an HTTP transaction, you can consider 3 aspects:
Round-trip time: typically the time from your HTTP request to the TCP ACK for that request.
Initial response time: the time between your HTTP request and the first packet of the HTTP response.
Total response time: the time between your HTTP request and the last packet of the HTTP response (Wireshark will tell you the last packet of the response, since that's when it sees the full HTTP response).
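Those three numbers come straight out of a capture, but as a rough client-side illustration of the same idea (with a placeholder host, and using connect time as a stand-in for the round trip, since the TCP ACK itself is not visible from application code):

    # Client-side sketch of the three latency aspects listed above.
    import socket
    import time

    HOST, PORT = "example.com", 80   # placeholder target

    t0 = time.monotonic()
    sock = socket.create_connection((HOST, PORT), timeout=10)
    t_connect = time.monotonic() - t0    # ~ one network round trip (SYN/SYN-ACK)

    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode()
    t1 = time.monotonic()
    sock.sendall(request)

    first = sock.recv(4096)
    t_first_byte = time.monotonic() - t1   # "initial response time"

    while sock.recv(4096):                 # drain until the server closes
        pass
    t_total = time.monotonic() - t1        # "total response time"
    sock.close()

    print(f"connect ~{t_connect:.3f}s, first byte {t_first_byte:.3f}s, total {t_total:.3f}s")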
Good luck.

Relation between HTTP Keep Alive duration and TCP timeout duration

I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP Keep-Alive timeout value through some configuration. How is this value used by the web server? Is it just set on the underlying TCP/IP socket, i.e. are the HTTP Keep-Alive timeout and the TCP/IP keep-alive timeout the same, or are they treated differently?
My understanding (which may be incorrect) is:
The Web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP Keep Alive timeout and creates a Worker thread that counts down the specified HTTP timeout interval. When the Worker thread hits zero, it closes the connection.
EDIT:
My question is about the relation or difference between the two timeout durations, i.e. what will happen when the HTTP keep-alive timeout duration and the timeout on the socket (SO_TIMEOUT) that the web server uses are different? Should I even worry about whether these two are the same?
An open TCP socket does not require any communication whatsoever between the two parties (let's call them Alice and Bob) unless actual data is being sent. If Alice has received acknowledgments for all the data she's sent to Bob, there's no way she can distinguish among the following cases:
Bob has been unplugged, or is otherwise inaccessible to Alice.
Bob has been rebooted, or otherwise forgotten about the open TCP socket he'd established with Alice.
Bob is connected to Alice, and knows he has an open connection, but doesn't have anything he wants to say.
If Alice hasn't heard from Bob in a while and wants to distinguish among the above conditions, she can resend her last byte of data, wrapped in a suitable TCP frame to be recognizable as a retransmission, essentially pretending she hasn't heard the acknowledgment. If Bob is unplugged, she'll hear nothing back, even if she repeatedly sends the packet over a period of many seconds. If Bob has rebooted or forgotten the connection, he will immediately respond saying the connection is invalid. If Bob is happy with the connection and simply has nothing to say, he'll respond with an acknowledgment of the retransmission.
The Timeout indicates how long Alice is willing to wait for a response when she sends a packet which demands a reply. The Keepalive time indicates how much time she should allow to lapse before she retransmits her last bit of data and demands an acknowledgment. If Bob goes missing, the sum of the Keepalive and Timeout values will indicate the worst-case time between Alice receiving her last bit of data and her deciding that Bob is dead.
They're two separate mechanisms; the name is a coincidence.
HTTP keep-alive (also known as persistent connections) is keeping the TCP socket open so that another request can be made without setting up a new connection.
TCP keep-alive is a periodic check to make sure that the connection is still up and functioning. It's often used to assure that a NAT box (e.g., a DSL router) doesn't "forget" the mapping between an internal and external ip/port.
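To make the "two separate mechanisms" point concrete, here is a minimal Python sketch; the TCP keepalive option names shown are Linux-specific and the host is a placeholder:

    # HTTP keep-alive vs. TCP keepalive, side by side (Linux-specific options).
    import http.client
    import socket

    HOST = "example.com"   # placeholder

    # HTTP keep-alive: reuse one TCP connection for several HTTP requests.
    conn = http.client.HTTPConnection(HOST, 80)
    for path in ("/", "/"):
        conn.request("GET", path)          # same socket, new HTTP request
        conn.getresponse().read()          # must drain the response before reuse
    conn.close()

    # TCP keepalive: periodic probes on an otherwise idle connection, so that a
    # dead peer (or a NAT box that dropped the mapping) is eventually detected.
    sock = socket.create_connection((HOST, 80))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # idle time before probing
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # interval between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # failed probes before giving up
    # ... use the socket as usual ...
    sock.close()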
KeepAliveTimeout Directive
Description: Amount of time the server will wait for subsequent requests on a persistent connection
Syntax: KeepAliveTimeout seconds
Default: KeepAliveTimeout 15
Context: server config, virtual host
Status: Core
Module: core
The number of seconds Apache will wait for a subsequent request before closing the connection. Once a request has been received, the timeout value specified by the Timeout directive applies.
Setting KeepAliveTimeout to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients.
In a name-based virtual host context, the value of the first defined virtual host (the default host) in a set of NameVirtualHost will be used. The other values will be ignored.
TimeOut Directive
Description: Amount of time the server will wait for certain events before failing a request
Syntax: TimeOut seconds
Default: TimeOut 300
Context: server config, virtual host
Status: Core
Module: core
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
We plan on making these separately configurable at some point down the road. The timer used to default to 1200 before 1.2, but has been lowered to 300, which is still far more than necessary in most situations. It is not set any lower by default because there may still be odd places in the code where the timer is not reset when a packet is sent.
