JAX-RS Get Open Connection Time - http

I'm using JAX-RS to make HTTP requests. Is there a way to get the time it takes to open a connection (as opposed to the time to get a response once the connection is already open)?
The opening of the connection and the reading of the response seem to be hidden from the calling code.
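The JAX-RS Client API does not expose the connect phase separately. One workaround is to time a raw TCP connect to the same host and port just before issuing the request. A minimal sketch (`ConnectTimer` is a hypothetical helper name, not part of any library):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimer {

    // Times only the TCP connect phase by opening (and immediately
    // closing) a throwaway socket to the target host and port.
    public static long measureConnectMillis(String host, int port, int timeoutMillis)
            throws IOException {
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

The caveat is that this measures a separate connection: the JAX-RS request that follows may reuse a pooled connection and not pay the connect cost again, so treat the number as an estimate of the connect latency to that endpoint rather than a measurement of the actual request.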

Related

Signalr Calls are quite large with longPolling transport type - Performance impact

I am using SignalR with the longPolling transport type. The functionality works as expected, but watching in real time, I can see quite a large number of SignalR calls, which is affecting performance heavily.
Based on my analysis, it seems longPolling creates a connection, uses it, and closes it; the connection is then re-created on demand. I suspect this is the cause of the large number of SignalR calls at certain points in time.
Could you please share your thoughts on how to resolve or avoid this large number of SignalR calls?
When I tried to use foreverFrame as the transport type, the SignalR connection did not get established. I saw the following errors in the console.
SignalR: Failed to connect using forever frame source, it timed out after 3000s
SignalR: Stopping forever frame
SignalR: No transport could be initialized successfully. Try specifying a different transport or none at all for auto initialization
The issue occurred when starting the SignalR hub.
That appears to be default behavior for long polling based on the docs - Long polling does not create a persistent connection, but instead polls the server with a request that stays open until the server responds, at which point the connection closes, and a new connection is requested immediately. This may introduce some latency while the connection resets.
Your use of foreverFrame may not work because the transport is not available in the browser you are using.
I have yet to understand why anyone would force a specific transport. Possibly I have just not run into a scenario where it is required.
SignalR handles determining which transport to use with each client, which is one of its great benefits.

golang http (or gocraft) rate limit

I have a server implementing a REST service using Go's net/http and the gocraft/web package.
I also have a client, written in C++, that sends JSON requests via libcurl.
What I notice is that when there are many client threads sending concurrent JSON requests, the client sometimes gets a "Couldn't connect to server" error, but after a while it can resume sending.
Since the observation points to rate limiting, my question is: where is the rate limiting happening? Is it on the client side (libcurl), in Go's http package, or in gocraft/web?

"Interrupted" TCP connections

I am looking at this question: .NET creating corrupt files, specifically CodesInChaos's comment:
Sometimes an interrupted TCP connection doesn't give any error but
just behaves like the end of the stream was reached
I searched more about this and got here: The Mysteries of Connection Close
Any HTTP client, server, or proxy can close a TCP transport connection
at any time. The connections normally are closed at the end of a
message,[18] but during error conditions, the connection may be closed
in the middle of a header line or in other strange places
...
When a client or proxy receives an HTTP response terminating in
connection close, and the actual transferred entity length doesn't
match the Content-Length (or there is no Content-Length), the receiver
should question the correctness of the length.
Q: How is it possible that a transport-layer error can silently propagate up without raising an exception? I thought the TCP standard included error handling: checksums, packet sequence numbers, and so on. Does anyone have more information, for example what can cause such errors, or steps to troubleshoot?
Edit: example 1
some of the responses contain JSON that is truncated, i.e. we get a status of 200 but the JSON is malformed and truncated
...
Also sent the details to MS support with the source code and they can reproduce on their side as well
example 2
We've got a user who is getting an error on a page due to an HTTP request getting truncated.
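The quoted advice to "question the correctness of the length" can be turned into a defensive check: count the bytes actually received and compare them with the declared Content-Length. A minimal sketch (`LengthCheck` is a hypothetical helper, not part of any library):

```java
import java.io.IOException;
import java.io.InputStream;

public class LengthCheck {

    // Drains the response body and reports whether the number of bytes
    // actually received matches the declared Content-Length; a shortfall
    // suggests the connection was closed mid-transfer, even if the
    // status code was 200.
    public static boolean bodyMatchesContentLength(InputStream body, long contentLength)
            throws IOException {
        long total = 0;
        byte[] buffer = new byte[8192];
        int read;
        while ((read = body.read(buffer)) != -1) {
            total += read;
        }
        return total == contentLength;
    }
}
```

This explains how a truncation can pass unnoticed: the status line and headers arrive intact, so the client sees a 200, and the premature close looks identical to a normal end-of-stream unless the receiver checks the length itself.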

Does HTTP long polling support heartbeat message?

I am using HTTP long polling to push server events to a client.
On the client side, I send a long-polling request to the server and block there, waiting for an event from the server.
On the server side, we use the CometD framework (I am on the client side and do not really know much about the server side).
The problem is that after some time the connection breaks, and the client cannot detect this, so it blocks there forever. We are trying to implement some kind of heartbeat message, sent every N minutes, to keep the connection active, but this does not seem to work.
My question is: does HTTP long polling support heartbeat messages? As far as I understand, HTTP long polling only allows the server to send one event and closes the connection immediately thereafter. The client must reconnect and send a new request in order to receive the next event. Is it possible for the server to send heartbeat messages every N minutes while still keeping the connection open until a real server event happens?
If you use the CometD framework, then it takes care of notifying the application (both on client and on server) about when the connection is broken, and it does send heartbeat messages.
What you call "HTTP long polling" is just a normal HTTP request, so in itself does not support heartbeat messages.
You can use HTTP long polling requests to implement heartbeat messages, and this is what CometD does for you under the covers.
In CometD, the response to a HTTP long poll request may deliver multiple messages, and the connection will not be closed afterwards. The client will send another HTTP long poll request without the need to reconnect, possibly reusing the previous connection.
CometD offers to your application a higher level API that is independent from the transport, so you can use WebSocket rather than HTTP, which is way more efficient, without changing a single line in your application.
You need to use the CometD libraries on both the client (JavaScript and Java) and the server, and everything will just work.
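Outside of CometD, the heartbeat idea can be sketched in plain Java: the client treats any response, heartbeat or not, as proof the connection is alive, and only a real event ends the poll loop. In this sketch, `fetch` stands in for a blocking HTTP long-poll request, and the `"heartbeat"` sentinel is an assumed message format, not a standard:

```java
import java.util.concurrent.Callable;

public class LongPollLoop {

    // Repeats long-poll iterations until a non-heartbeat message arrives.
    // fetch stands in for a blocking HTTP request whose read timeout is
    // set slightly longer than the server's heartbeat interval, so a
    // broken connection surfaces as a timeout instead of blocking forever.
    public static String waitForEvent(Callable<String> fetch) throws Exception {
        while (true) {
            String message = fetch.call();
            if (!"heartbeat".equals(message)) {
                return message; // a real server event
            }
            // Heartbeat received: the connection is alive, poll again.
        }
    }
}
```

The key design point is the pairing: the server promises a response (event or heartbeat) within some interval, and the client sets its read timeout just above that interval, so silence longer than the timeout reliably means the connection is dead.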

jmeter tcp response assertion

My device and a socket server communicate over TCP. Now I want to load-test my server, so I am trying to use JMeter.
The server and the device keep the connection alive. I need to send a login message before any other message, and messages have no end-of-line character; instead, some bytes at the start define how long each message is.
When I send my login message, the server responds with a success code. Because the connection is kept open and there is no end-of-line character, JMeter does not know when it has received the full response, so it waits until it times out. I even tried a Response Assertion using "contains", but it still does not work.
My question is: what should I do in this case so that when JMeter receives certain bytes, for example the word 'SUCCESS' from the server, it understands the sample has passed and keeps the connection open for the next request?
