For JBoss's AccessLogValve, we have activated the optional field,
%D - Time taken to process the request, in millis
This has been super-valuable in showing us which responses took the most time. I now need details on how this field is calculated. Specifically, when does JBoss stop the "time taken to process..." clock? For example, does JBoss stop it when the [first segment of the] response is given to the TCP/IP stack to transmit to the client? Or when the client sends an [ACK] following the last segment? Or when a network timeout occurs?
Knowing this will help me find root causes for some crazy-long response times in our production access logs.
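For what it's worth, access-log timers of this kind are typically implemented by wrapping the container's request-processing call, so the clock stops when the response has been handed back through the valve/filter chain (i.e. written into the container's output buffers), not when the client ACKs the last TCP segment. The sketch below only illustrates that wrap-around pattern using a servlet Filter; it is not JBoss source code and the names are illustrative:

```java
// Illustrative sketch of a valve/filter-style request timer -- NOT JBoss source.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class RequestTimingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();   // clock starts when the container hands us the request
        try {
            chain.doFilter(req, res);               // application generates and writes the response
        } finally {
            // Clock stops when control returns to the filter, i.e. after the response has been
            // written to the container's output buffers -- no TCP ACK from the client is involved.
            long elapsed = System.currentTimeMillis() - start;
            String uri = ((HttpServletRequest) req).getRequestURI();
            System.out.printf("%s took %d ms (analogous to %%D)%n", uri, elapsed);
        }
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}
```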
I'm troubleshooting slowness in my web API application. The issue I'm seeing is:
- A packet arrives containing the HTTP POST with headers only.
- Then time passes (sometimes very long, ~30 seconds; sometimes milliseconds).
- Another packet arrives containing the payload.
The split is always between headers and payload.
This causes the processing of the request in the application to be delayed by that interval.
Is it normal for a request to be split across packets like that?
Have you encountered it?
Is it possible an AWS ELBv1 is somehow doing that?
Any direction will help - I'm totally confused...
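For context, nothing in HTTP or TCP requires the headers and the body to arrive in the same segment; a client (or an intermediary such as a load balancer) may legitimately flush the headers first and send the payload later. A minimal sketch that reproduces the behaviour against a local test server (host, port, path and the 5-second pause are placeholder assumptions):

```java
// Hypothetical repro: send an HTTP POST whose headers and body go out in separate
// TCP segments, with a deliberate pause in between.
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SplitPostClient {
    public static void main(String[] args) throws Exception {
        String body = "{\"hello\":\"world\"}";
        try (Socket socket = new Socket("localhost", 8080)) {
            OutputStream out = socket.getOutputStream();
            String headers = "POST /api/test HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Content-Type: application/json\r\n"
                    + "Content-Length: " + body.getBytes(StandardCharsets.UTF_8).length + "\r\n"
                    + "\r\n";
            out.write(headers.getBytes(StandardCharsets.UTF_8));
            out.flush();                 // headers go out in their own segment(s)
            Thread.sleep(5000);          // simulate the gap seen in the capture
            out.write(body.getBytes(StandardCharsets.UTF_8));
            out.flush();                 // payload arrives in a later segment
        }
    }
}
```

A server that blocks while reading the request body will see exactly this gap as "processing delay", even though its own code has not started running yet.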
I have a static one-way send port configured for HTTP which sends XML documents to an external website. It's been working fine for over a year, but lately it has been throwing errors:
The HTTP send adapter cannot complete the transmission within the specified time. Destination: https://xyz.example.com
I've tried extending the timeout on the send port, and the errors keep happening. The vendor says there have been no changes on their end, I have made no changes to the server, and the network team says no changes have been made either.
I've tested the interface with Postman and it works every time I try it.
Resuming the messages does nothing, as I get the same error. What I've noticed is that if I reset the host instance, the messages start flowing again.
Any clues?
Maybe you have a lot of HTTP requests and Outbound Throttling is kicking in? Check the Message delivery throttling state performance counter (under the BizTalk:MessageAgent performance object category) to measure the current throttling state and see if it is different from 0.
Host Throttling Performance Counters
How BizTalk Server Implements Host Throttling
In trace 3 the value is 1, which means "Throttling due to imbalanced message delivery rate": the "Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent)". The send port can't send messages as fast as it receives new ones.
You can check both rates with the Message delivery incoming rate and Message delivery outgoing rate performance counters.
You can increase the "Rate overdrive factor (percent)" in the Host properties to allow more load; by default it is 125 (the input rate can be up to 25% above the output rate before throttling begins).
Or adjust the Sampling Window Duration or the Minimum Number of Samples, depending on the behaviour of your load.
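In other words, the condition behind throttling state 1 is just a rate comparison: the host starts throttling once the incoming message delivery rate exceeds the outgoing rate scaled by the overdrive factor. A quick sketch of that arithmetic (the rates below are made-up example values, not counters read from BizTalk):

```java
// Quick check of the throttling condition described above.
public class ThrottlingCheck {
    public static void main(String[] args) {
        double incomingRate = 130.0;        // Message delivery incoming rate (msgs/sec) -- example value
        double outgoingRate = 100.0;        // Message delivery outgoing rate (msgs/sec) -- example value
        double overdrivePercent = 125.0;    // default "Rate overdrive factor (percent)"

        // Throttling state 1 is reported when the incoming rate exceeds
        // the outgoing rate scaled by the overdrive factor.
        double limit = outgoingRate * (overdrivePercent / 100.0);
        boolean throttled = incomingRate > limit;
        System.out.printf("limit = %.1f msgs/sec, throttled = %b%n", limit, throttled);
        // prints: limit = 125.0 msgs/sec, throttled = true
    }
}
```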
Can I measure the response time for an HTTP/HTTPS website using Wireshark packet captures? Most websites/blogs only show how to check the HTTP response time. If I want to know the HTTPS response time, how do I do that? Thank you.
Configure Wireshark to decrypt SSL, and then measure the response time as with HTTP (i.e., by subtracting the packet times). One easy way to decrypt SSL traffic is to configure your browser to save pre-master secrets to a log file and configure Wireshark to look for secrets in that log file. As an example, to configure Chrome, you set the environment variable SSLKEYLOGFILE to the full path of the log file and restart any Chrome processes (including background processes). Then in Wireshark, open Preferences >> Protocols >> SSL and point the Pre-Master-Secret log at the same file. There is a more detailed walk-through at: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
To get the response time, find a packet from the request/response conversation. Some of those packets should be highlighted in green with a protocol of "HTTP" if it was successfully decrypted. Right click on one of the packets and select "Follow >> SSL Stream". This should filter all the packets in the main window, limiting them to the TCP stream of interest. From there, you can scroll to find the last packet from the request, the first packet from the response, and the last packet from the response. Then, depending on what you mean by response time, just subtract the two times which cover that. For example, if you want the time from when the request was sent to the time when the response started, just subtract the time of the last request packet from the time of the first response packet.
You should also be able to use the other websites you referred to in your question to get the response timing. The process is essentially the same once you have the SSL stream decrypted.
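If it helps to see the arithmetic spelled out, the response-time calculation is just a subtraction of packet timestamps taken from the decrypted stream. A minimal sketch, assuming you have already noted the relevant timestamps in Wireshark (the values below are placeholders):

```java
// Minimal sketch of the timing arithmetic; the timestamps are placeholder values
// read off the decrypted SSL stream in Wireshark.
public class HttpsResponseTime {
    public static void main(String[] args) {
        double lastRequestPacket   = 1.234;   // time (s) of the last packet of the request
        double firstResponsePacket = 1.678;   // time (s) of the first packet of the response
        double lastResponsePacket  = 2.410;   // time (s) of the last packet of the response

        double timeToFirstByte   = firstResponsePacket - lastRequestPacket; // server "think" time + one network trip
        double totalResponseTime = lastResponsePacket - lastRequestPacket;  // includes transferring the full body

        System.out.printf("time to first byte: %.3f s%n", timeToFirstByte);
        System.out.printf("total response time: %.3f s%n", totalResponseTime);
    }
}
```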
We have a shell script set up on one Unix box (A) that remotely calls a web service deployed on another box (B). On A we just have the scripts, the configuration, and the JAR file needed for the classpath.
After the batch job is kicked off, control is passed from A to B for the transactions to happen on B. Usually the processing finishes on B in less than an hour, but in some cases (when we receive larger data for processing) the process continues for more than an hour. In those cases the firewall tears down the connection between the two hosts after one hour of inactivity. Thus, control is never returned from B to A and we are not notified that the batch job has ended.
To tackle this, our network team has suggested implementing keep-alives at the application level.
My question is: where and how should I implement those? Will that be in the web service code, some parameters passed from the shell script, or something else? I tried to Google around but could not find much.
You basically send an application-level message and wait for a response to it. That is, your applications must support sending, receiving and replying to these heartbeat messages. See the FIX Heartbeat message, for example:
The Heartbeat monitors the status of the communication link and identifies when the last of a string of messages was not received.
When either end of a FIX connection has not sent any data for [HeartBtInt] seconds, it will transmit a Heartbeat message. When either end of the connection has not received any data for (HeartBtInt + "some reasonable transmission time") seconds, it will transmit a Test Request message. If there is still no Heartbeat message received after (HeartBtInt + "some reasonable transmission time") seconds then the connection should be considered lost and corrective action be initiated....
Additionally, the message you send should include a local timestamp, and the reply to it should contain that same timestamp. This allows you to measure the application-to-application round-trip time.
Also, some NATs close your TCP connection after N minutes of inactivity (e.g. after 30 minutes). Sending heartbeat messages allows you to keep a connection up for as long as required.
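As a concrete illustration, here is a minimal sketch of such a heartbeat sender. The host name, port, wire format ("PING <timestamp>" echoed back as "PONG <timestamp>") and the 5-minute interval are all illustrative assumptions, not part of any existing API; the other end (box B) would need a matching echo handler.

```java
// Minimal application-level heartbeat sketch; all names and the protocol are illustrative.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class HeartbeatSender {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("host-b.example.com", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {

            while (true) {
                long sentAt = System.currentTimeMillis();
                out.println("PING " + sentAt);       // traffic keeps the firewall/NAT state alive
                String reply = in.readLine();        // peer echoes the timestamp back ("PONG <ts>")
                if (reply == null) {
                    System.err.println("connection lost -- take corrective action");
                    break;
                }
                long rtt = System.currentTimeMillis() - Long.parseLong(reply.split(" ")[1]);
                System.out.println("round-trip time: " + rtt + " ms");
                Thread.sleep(5 * 60 * 1000);         // well under the 1-hour idle timeout
            }
        }
    }
}
```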
I am trying to measure the performance of the server-side code. Looking at the request from the client, Fiddler gives me the following view:
Fiddler's documentation states: "The vertical line indicates the time to first byte of the server's response (Timers.ServerBeginResponse)."
Does that mean the time of the server's TCP response (e.g. an ACK), or does it mean that the server compiled all the data in less than half a second and then took about 5 seconds to transfer it?
TTFB is the time from the moment the request is fired to getting the first byte back from the server as a response. It includes all the steps for that to happen.
It is the duration from the virtual user making an HTTP request to the first byte of the page being received by the browser. This time is made up of the socket connection time, the time taken to send the HTTP request, and the time taken to get the first byte of the page.
So yes: less than half a second to respond, then about 5 seconds to transfer.
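If you want to sanity-check that split outside Fiddler, you can time the first byte and the full body separately in code. A rough sketch (the URL is a placeholder, and the first measurement also includes DNS and connection setup):

```java
// Rough sketch: measure time-to-first-byte vs. total download time for a URL.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TtfbProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/large-response");  // placeholder URL
        long start = System.nanoTime();

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            int first = in.read();                                 // blocks until the first byte arrives
            long ttfbMs = (System.nanoTime() - start) / 1_000_000;

            byte[] buf = new byte[8192];
            long total = (first >= 0) ? 1 : 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;                                        // drain the rest of the body
            }
            long totalMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("TTFB: %d ms, full response (%d bytes): %d ms%n", ttfbMs, total, totalMs);
        } finally {
            conn.disconnect();
        }
    }
}
```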