Forcing a BizTalk Host to Throttle for Debugging Purposes

We're currently having an issue on our production servers and would like to try to replicate it in our dev environment. I'm currently awaiting access to our performance monitoring tool, and while waiting I would like to experiment a little.
Since I suspect host throttling in prod, I'm thinking of forcing hosts to throttle in dev to see if that recreates the issue.
Is there a way to do this?

As others have mentioned, monitoring the throttling counters and other counters like memory and in-process (WIP) messages is a must to see what is going on in your production server. If you have SCOM, I would also recommend setting up an alert on throttling states of 3 or higher (both the publishing and delivery states).
Message throughput can grind to a halt, especially in the memory (4, 5) and queue size (6) states. States 1 and 2 are generally short-lived (e.g. on arrival of a large batch of messages) and BizTalk recovers within a few seconds.
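If you want to watch these states outside of Performance Monitor, they are exposed as ordinary performance counters under the BizTalk:Message Agent category. A minimal sketch (counter/category names as I recall them - verify them in perfmon on your install, and replace the host instance name, which is a placeholder here):

using System;
using System.Diagnostics;

class ThrottlingStateWatcher
{
    static void Main()
    {
        // The counter instance is the host you want to watch ("BizTalkServerApplication" is a placeholder).
        const string hostInstance = "BizTalkServerApplication";

        var publishing = new PerformanceCounter(
            "BizTalk:Message Agent", "Message publishing throttling state", hostInstance);
        var delivery = new PerformanceCounter(
            "BizTalk:Message Agent", "Message delivery throttling state", hostInstance);

        while (true)
        {
            // State 0 = no throttling; 4/5 = memory-based; 6 = queue/database size (per the notes above).
            Console.WriteLine("publishing={0} delivery={1}",
                publishing.NextValue(), delivery.NextValue());
            System.Threading.Thread.Sleep(5000);
        }
    }
}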
Simulating the memory states in your dev environment should be straightforward by tweaking the throttling thresholds (obviously not something to be taken lightly in production!).
For example, to trigger the memory threshold states: AFAIK the lowest memory usage threshold you can set is 101 MB. Running a load test in dev should then be able to reproduce the throttle.
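If you want to script that change in dev rather than click through the host settings, something like the following should work. This is a hedged sketch only: the MSBTS_HostSetting WMI class is real, but the throttling property name below is from memory and should be verified against its documentation, and 'MyProcessingHost' is a placeholder host name.

using System.Management;   // add a reference to System.Management.dll

class LowerBizTalkMemoryThreshold
{
    static void Main()
    {
        var searcher = new ManagementObjectSearcher(
            @"root\MicrosoftBizTalkServer",
            "SELECT * FROM MSBTS_HostSetting WHERE Name = 'MyProcessingHost'");

        foreach (ManagementObject host in searcher.Get())
        {
            // Values above 100 are interpreted as MB, so 101 is about as low as it goes.
            host["ThrottlingProcessMemoryThreshold"] = (uint)101;
            host.Put();   // commit the change; restart the host instance afterwards
        }
    }
}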
There is also apparently a user-based throttling override to set states 10 and 11, although I haven't actually tried this.
Some other experience on avoiding throttling:
(Caveat - I don't have an active BizTalk 2006/R2 setup - this is for 2009 / 2010)
If you do a lot of asynchronous processing (e.g. queue receives), ensure that you have split functionality into separate Receive, Processing and Send hosts. This way you can configure throttling on the async receive hosts to trigger much earlier than on the processing and send hosts, which should have the effect of constricting new incoming messages to the MessageBox while allowing existing messages to complete processing.
On 64-bit hosts, the default 25% host memory usage throttling level is usually an unnecessary liability - we increased this using Yossi Dahan's recommendation of 50% on a 4 GB server.
Note that suspended messages count toward throttling state 6 - ensure that you have a strategy for dealing with suspended messages (and obviously ensure that the SQL Server Agent jobs are running!).

Related

Queue system recommendation approach

We have a bus reservation system running in GKE in which we handle the creation of reservations with different threads. Because of that, CRUD Java methods can sometimes run simultaneously for the same bus, with the result that only the LAST of the simultaneous updates is saved in our DB (the other simultaneous updates are lost).
Even if the probability is low (the simultaneous updates need to be really close, 1-2 seconds), we need to avoid this. My question is about how to approach a solution:
Lock the bus object and return an error to the other simultaneous requests
An in-memory map or a Redis cache to track the bus requests
Use GCP Pub/Sub, Kafka or RabbitMQ as a queue system
Try to focus the efforts on reducing the simultaneous time window (from 1-2 seconds down to milliseconds)
Others?
Also, we are worried that GKE request-handling scalability may become an issue in the future. If we manage a relatively higher number of buses, would we need to implement a queue system between the client and the server, or will the GKE load balancer and Ambassador already manage it for us? If we do need a queue system in the future, could it also be used for the collision problem we are facing now?
Last, the reservation requests from the client often take a while. Therefore, we are changing the requests to be handled asynchronously, with a long-polling approach from the client to learn the task status. Could we link this solution to the current problem, for example by using the Redis cache or the queue system to track the task status? Or should we try to keep the requests synchronous and focus on reducing the processing time (which may be quite difficult)?

When will the Cosmos DB client's mapping cache be refreshed?

We are using version 2.9.2 of the Cosmos DB SDK. We perform document CRUD operations. Usually the end-to-end P95 latency is 20 ms, but sometimes the latency is over 1000 ms. A high-latency period lasts for 10 hours to 1 day. The collection is not being throttled.
We have gotten some background information from:
https://icm.ad.msft.net/imp/v3/incidents/details/171243015/home
https://icm.ad.msft.net/imp/v3/incidents/details/168242283/home
There are some diagnostics strings in the tickets.
We know that the client maintains a cache of the mapping from logical partition to physical replica address. This mapping may become outdated because of replica movement or an outage, so the client tries to read from the second/third replica. However, this retry has a significant impact on end-to-end latency. We also observe that the high latency/timeouts can last for several hours, even days. I expect there is some mechanism for refreshing the mapping cache in the client, but it seems the client only stops visiting more than one replica after we redeploy our service.
Here are my questions:
How can the client tell whether it is unable to connect to a certain replica? Will the client wait until a timeout, or does the server tell the client that the replica is unavailable?
Under which conditions will the mapping cache be refreshed? We are using Session consistency and TCP mode.
Will restarting our service force the cache to be refreshed? Or does refreshing only happen when the machine restarts?
When we find there is a replica outage, is there any way to mitigate it quickly?
What operations are performed (document CRUD or query)?
And what are the observed latencies and frequencies? Please also check whether the collection is being throttled (with a custom throttling policy).
The client does manage some metadata and does handle its staleness efficiently, within SLA bounds.
Can you please create a support ticket with the account details and the 'RequestDiagnostics', and we shall look into it.

Load Testing Thousands of SLOW Connections

I would like to test an upload service with hundreds, if not thousands,
of slow HTTPS connections simultaneously.
I would like to have lots of, say, 3G-quality connections,
each throttled with low bandwidth and high latency,
each sending a few megabytes of data up to the server,
resulting in lots of concurrent, long-lived requests being handled by the server.
There are many load generation tools that can generate thousands of simultaneous requests.
(I'm currently using Locust, mostly so that I can take
advantage of my existing client library written in Python.)
Such tools typically run each concurrent request as fast as possible
over the shared network link.
There are various ways to adjust the apparent bandwidth and latency of TCP connections,
such as Linux's TC
and handy wrappers like Comcast.
As far as I can tell, TC and the like shape the shared link,
but they cannot throttle the individual requests.
(If you only want to throttle a single request, TC works well.)
In theory, with many clients sharing the same throttled network link,
each request could be run serially,
subject to the constrained bandwidth,
rather than having lots of requests executing concurrently,
a few packets at a time.
The former would result in far fewer active requests executing
concurrently on the server.
I suspect that the tool I want has to actively manage each individual client's sending
and receiving to throttle them fairly.
Is there such a tool?
You can take a look at Apache JMeter; it can "throttle" connections to a configurable throughput via the following properties:
httpclient.socket.http.cps=0
httpclient.socket.https.cps=0
The properties can be defined either in the user.properties file or passed to JMeter via the -J command-line argument.
cps stands for characters per second, so you can "slow down" JMeter threads (virtual users) to the given throughput rate; the formula for the cps calculation is:
cps = (target bandwidth in kbps * 1024) / 8
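For example, to give each virtual user roughly 3G-like bandwidth of around 384 kbps (an illustrative figure, not a recommendation):
cps = (384 * 1024) / 8 = 49152
httpclient.socket.http.cps=49152
httpclient.socket.https.cps=49152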
Check out How to Simulate Different Network Speeds in Your JMeter Load Test for more information.
Yes, these are network simulators. A very primitive one is WANem. It is not going to cover your testing needs. You will need something akin to Shunra Storm, a hardware device which can manage individual connections and impairment, with models derived from Ookla (think speedtest.com) relating to 3G/4G/5G connections in the wild. Well, perhaps I should say "could manage," as this product has been absent since the HP acquisition of Shunra.
There are some other market competitors on the network front from companies such as Ixia, Agilent, PacketStorm, Spirent and the like. None of them are inexpensive, but I see your need. Slow, and particularly dirty, connections like cell phones have a disproportionate impact on the stack and can result in the server running out of resources with fewer mobile connections than desktop ones.
On a side note, be sure you are including a representative model for think time in your test code. If you collapse the client-server model with no or extremely limited think time and only impair the network, bad things can happen. This will wreak particular havoc with both the predictability and repeatability of your tests. You may also wind up chasing dozens of engineering ghosts related to load in your code that will not occur in production, because of the natural delays and the release of resources which should occur during those windows between client requests.

BizTalk 2010 Determine Host Throttle settings for receive locations

Because a required pipeline component seems to have trouble hitting a database for details of messages, I am planning to use host throttling to limit the number of files BizTalk is processing at the receive location. I want to be able to specify that X messages should be processed within Y seconds (or any other feasible timespan). Does anyone know which throttling settings can be used to force this behavior?
I know how to set the values; however, I cannot find the best configuration.
(Note: one solution might also be to adjust the pipeline, but it contains third-party components which cannot be changed.)
From How BizTalk Server Implements Host Throttling, BizTalk looks at:
Amount of memory in use (both system-wide and host process memory).
Number of in-process messages being delivered or processed (threshold for outbound throttling).
Number of threads in use.
Database size, measured by the number of items in the queue tables for all hosts and the number of items in the spool and tracking tables.
Number of concurrent database connections.
Rate of message publishing (inbound) and delivery or processing (outbound).
The only one that throttles inbound is the rate of message publishing, but that only kicks in after the pipeline/port has processed the message, so it may not be of any use in this scenario; you would have to test that.
You will probably want to set that process up under its own host, so that if it hits throttling thresholds it does not throttle everything else as well.
If possible you should move the component to a send port pipeline as throttling send ports is much more controllable. One way is to set the send port to ordered delivery, although that can cause a backlog especially if you get a suspended message.
I think your most straightforward approach here would be to write a custom adapter. Unfortunately, the out-of-the-box File adapter does not directly support throttling/polling intervals, and I don't think the suggestions already given would directly impact the custom pipeline processing if it's hitting the DB directly through ADO.NET (but it couldn't hurt to try). You can set the BatchSize property on the File adapter settings, but even then there's nothing stopping it from submitting that batch size as fast as it possibly can, over and over again.
A custom adapter could be created to wait for a period before submitting additional files for processing. You could base it on the SDK File Adapter sample.

Scalability issue when using outgoing asynchronous web requests on IIS 7.5

A bit of a long description below, but it is quite a tricky problem. I have tried to cover what we do know about the problem in order to narrow down the search. The question is more of an ongoing investigation than a single-question one, but I think it may help others as well. Please add information in comments, or correct me if you think I am wrong about some of the assumptions below.
UPDATE 19/2, 2013: We have cleared some question marks in this and I have a theory of what the main problem is which I'll update below. Not ready to write a "solved" response to it yet though.
UPDATE 24/4, 2013: Things have been stable in production (though I believe it is temporary) for a while now, and I think it is due to two reasons: 1) the port increase, and 2) a reduced number of outgoing (forwarded) requests. I'll continue this update further down, in the correct context.
We are currently investigating our production environment to determine why our IIS web server does not scale when too many outgoing asynchronous web service requests are being made (one incoming request may trigger multiple outgoing requests).
CPU is only at 20%, but we receive HTTP 503 errors on incoming requests and many outgoing web requests get the following exception: “SocketException: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full”. Clearly there is a scalability bottleneck somewhere, and we need to find out what it is and whether it is possible to solve it by configuration.
Application context:
We are running the IIS 7.5 integrated managed pipeline with .NET 4.5 on a Windows Server 2008 R2 64-bit operating system. We use only 1 worker process in IIS. Hardware varies slightly, but the machine used for examining the error is an 8-core Intel Xeon (16 hyper-threaded).
We use both asynchronous and synchronous web requests. Those that are asynchronous use the new .NET async support to make each incoming request issue multiple HTTP requests in the application to other servers over persisted TCP connections (keep-alive). Synchronous request execution time is low, 0-32 ms (longer times occur due to thread context switching). For the asynchronous requests, execution time can be up to 120 ms before the requests are aborted.
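For illustration, each forwarded call follows roughly this pattern (a simplified sketch rather than our actual code; the names are made up, and 120 ms is the cancellation limit mentioned above):

using System;
using System.Net;
using System.Threading.Tasks;

// Simplified sketch of one outgoing (forwarded) call with a ~120 ms abort.
static async Task<string> ForwardAsync(Uri uri)
{
    using (var client = new WebClient())
    {
        Task<string> call = client.DownloadStringTaskAsync(uri);
        Task finished = await Task.WhenAny(call, Task.Delay(120));
        if (finished != call)
        {
            client.CancelAsync();   // the request exceeded its 120 ms budget
            return null;            // the caller treats this as a timed-out forward
                                    // (the faulted task is deliberately left unobserved in this sketch)
        }
        return await call;          // completed within the limit
    }
}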
Normally each server serves up to ~1000 incoming requests/sec. Outgoing requests go from ~300 requests/sec up to ~600 requests/sec when the problem starts to arise. Problems only occur when outgoing async requests are enabled on the server and we go above a certain level of outgoing requests (~600 req/s).
Possible solutions to the problem:
Searching the Internet on this problem reveals a plethora of possible solutions candidates. Though, they are very much dependent upon versions of .NET, IIS and operating system so it takes time to find something in our context (anno 2013).
Below is a list of solution candidates and the conclusions we have come to so far with regard to our configuration context. I have categorised the detected problem areas, so far, into the following main categories:
Some queue(s) fill up
Problems with TCP connections and ports (UPDATE 19/2, 2013: This is the problem)
Too slow allocation of resources
Memory problems (UPDATE 19/2, 2013: This is most likely another problem)
1) Some queue(s) fill up
The outgoing asynchronous request exception message does indicate that some queue or buffer has been filled up, but it does not say which queue/buffer. Via the IIS forum (and a blog post referenced there) I have been able to distinguish 4 of possibly 6 (or more) different types of queues in the request pipeline, labeled A-F below.
Though it should be stated that, of all the queues defined below, we see for certain that the 1.B) thread pool performance counter Requests Queued gets very full during the problematic load. So it is likely that the cause of the problem is at the .NET level and not below it (C-F).
1.A) .NET Framework level queue?
We use the .NET Framework class WebClient for issuing the asynchronous calls (async support), as opposed to HttpClient, which in our experience showed the same issue but at a far lower req/s threshold. We do not know whether the .NET Framework implementation hides any internal queue(s) above the thread pool. We don’t think this is the case.
1.B) .NET Thread Pool
The Thread pool acts as a natural queue since the .NET Thread (default) Scheduler is picking threads from the thread pool to be executed.
Performance counter: [ASP.NET v4.0.30319].[Requests Queued].
Configuration possibilities:
(applicationPool) maxConcurrentRequestsPerCPU should be 5000 (instead of the previous default of 12). So in our case it should allow 5000*16 = 80.000 concurrent requests, which should be more than sufficient in our scenario (see the aspnet.config sketch below).
(processModel) autoConfig = true/false, which allows some thread-pool-related configuration to be set according to the machine configuration. We use true, which is a potential error candidate since these values may be set erroneously for our (high) needs.
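For reference, this is roughly where that setting lives - a sketch only, with example values rather than our own; the file is aspnet.config next to the installed framework (e.g. %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet.config):

<configuration>
  <system.web>
    <applicationPool maxConcurrentRequestsPerCPU="5000"
                     maxConcurrentThreadsPerCPU="0"
                     requestQueueLimit="5000" />
  </system.web>
</configuration>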
1.C) Global, process wide, native queue (IIS integrated mode only)
If the thread pool is full, requests start to pile up in this native (unmanaged) queue.
Performance counter:[ASP.NET v4.0.30319].[Requests in Native Queue]
Configuration possibilities: ????
1.D) HTTP.sys queue
This queue is not the same queue as 1.C) above. Here’s an explanation as stated to me “The HTTP.sys kernel queue is essentially a completion port on which user-mode (IIS) receives requests from kernel-mode (HTTP.sys). It has a queue limit, and when that is exceeded you will receive a 503 status code. The HTTPErr log will also indicate that this happened by logging a 503 status and QueueFull“.
Performance counter: I have not been able to find any performance counter for this queue, but by enabling the IIS HTTPErr log, it should be possible to detect if this queue gets flooded.
Configuration possibilities: This is set in IIS on the application pool, advanced setting: Queue Length. Default value is 1000. I have seen recommendations to increase it to 10.000. Though trying this increase has not solved our issue.
1.E) Operating System unknown queue(s)?
Although unlikely, I guess the OS could actually have a queue somewhere in between the network card buffer and the HTTP.sys queue.
1.F) Network card buffer:
As requests arrive at the network card, it is natural that they are placed in some buffer in order to be picked up by some OS kernel thread. Since this is kernel-level execution, and thus fast, it is not likely to be the culprit.
Windows Performance Counter: [Network Interface].[Packets Received Discarded] using the network card instance.
Configuration possibilities: ????
2) Problems with TCP connections and ports
This is a candidate that pops up here and there, though our outgoing (async) TCP requests are made over persisted (keep-alive) TCP connections. So as the traffic grows, the number of ephemeral ports in use should really only grow due to the incoming requests. And we know for sure that the problem only arises when we have outgoing requests enabled.
However, the problem may still arise because the port is allocated for a longer timeframe during the request. An outgoing request may take as long as 120 ms to execute (before the .NET Task (thread) is cancelled), which might mean that ports are allocated for a longer period. Analyzing the Windows performance counters verifies this assumption, since the number in TCPv4.[Connections Established] goes from a normal 2-3000 to peaks of almost 12.000 in total when the problem occurs.
We have verified that the configured maximum amount of TCP connections is set to the default of 16384. In this case, it may not be the problem, although we are dangerously close to the max limit.
When we try using netstat on the server, it mostly returns without any output at all; using TcpView also shows very few items in the beginning. If we let TcpView run for a while, it soon starts to show new (incoming) connections quite rapidly (say 25 connections/sec). Almost all connections are in the TIME_WAIT state from the beginning, suggesting that they have already completed and are waiting for clean-up. Do those connections use ephemeral ports? The local port is always 80, and the remote port is increasing. We wanted to use TcpView to see the outgoing connections, but we can’t see them listed at all, which is very strange. Can these two tools not handle the number of connections we are having?
(To be continued.... But please fill in with info if you know it… )
Furthermore, as a side note here: it was suggested in the blog post "ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0" that ServicePointManager.DefaultConnectionLimit should be set to int.MaxValue, which otherwise could be a problem. But in .NET 4.5 this is already the default from the start.
UPDATE 19/2, 2013:
It is reasonable to assume that we did in fact hit the max limit of 16.384 ports. We doubled the number of ports on all but one server, and only the old server ran into the problem when we hit the old peak load of outgoing requests. So why did TCPv4.[Connections Established] never show us a higher number than ~12.000 at problem times? My theory: most likely, although not established as fact (yet), the performance counter TCPv4.[Connections Established] is not equivalent to the number of ports that are currently allocated. I have not had time to catch up on my TCP state studying yet, but I am guessing that there are more TCP states than what "Connections Established" shows which would still render a port occupied. Since we cannot use the "Connections Established" performance counter as a way to detect the danger of running out of ports, it is important that we find some other way of detecting when we reach this max port range. And as described in the text above, we are not able to use either netstat or the TcpView application for this on our production servers. This is a problem! (I'll write more about it in an upcoming response to this post, I think.)
The number of ports is restricted on Windows to a maximum of 65.535 (although the first ~1000 should probably not be used). But it should be possible to avoid the problem of running out of ports by decreasing the time spent in the TCP state TIME_WAIT (default 240 seconds), as described in numerous places. It should free up ports faster. I was at first a bit hesitant about doing this, since we use both long-running database queries and WCF calls over TCP, and I wouldn't like to decrease the time constraint. Although I have not caught up on my TCP state machine reading yet, I think it might not be a problem after all. The TIME_WAIT state, I think, is only there to allow the handshake of a proper shutdown towards the client. So the actual data transfer on an existing TCP connection should not time out due to this limit. Worst case, the client is not shut down properly and instead needs to time out. I guess not all browsers implement this correctly, and it could possibly be a problem on the client side only. Though I am guessing a bit here...
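For reference, the two knobs that are usually mentioned for this on Windows Server 2008 R2 are the dynamic port range and the TIME_WAIT delay (a sketch; the numbers are examples rather than recommendations, and this kind of change deserves careful testing):

rem Widen the ephemeral (dynamic) port range; the default range is 49152-65535:
netsh int ipv4 set dynamicport tcp start=10000 num=55535

rem Shorten TIME_WAIT (default 240 seconds); requires a reboot:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f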
END UPDATE 19/2, 2013
UPDATE 24/4, 2013:
We have increased the number of ports to the maximum value. At the same time, we do not get as many forwarded outgoing requests as earlier. These two in combination should be the reason why we have not had any incidents. However, it is only temporary, since the number of outgoing requests is bound to increase again in the future on these servers. The problem thus lies, I think, in the fact that the port for an incoming request has to remain open during the time frame of the response for the forwarded requests. In our application, the cancellation limit for these forwarded requests is 120 ms, which can be compared with the normal <1 ms to handle a non-forwarded request. So in essence, I believe the finite number of ports is the major scalability bottleneck on such high-throughput servers (>1000 requests/sec on ~16-core machines) as the ones we are using. This, in combination with the GC work on cache reload (see below), makes the server especially vulnerable.
END UPDATE 24/4
3) Too slow allocation of resources
Our performance counters show that the number of queued requests in the Thread Pool (1B) fluctuates a lot during the time of the problem. So potentially this means that we have a dynamic situation in which the queue length starts to oscillate due to changes in the environment. For instance, this would be the case if there are flooding protection mechanisms that are activated when traffic is flooding. As it is, we have a number of these mechanisms:
3.A) Web load balancer
When things go really bad and the server responds with an HTTP 503 error, the load balancer will automatically remove the web server from active duty in production for a 15 second period. This means that the other servers will take the increased load during that time frame. During the “cooling period”, the server may finish serving its requests, and it will automatically be reinstated when the load balancer does its next ping. Of course this is only good as long as all servers don’t have a problem at once. Luckily, so far, we have not been in that situation.
3.B) Application specific valve
In the web application, we have our own constructed valve (Yes. It is a "valve". Not a "value") triggered by a Windows Performance Counter for Queued Requests in the thread pool. There is a thread, started in Application_Start, that checks this performance counter value each second. And if the value exceeds 2000, all outgoing traffic ceases to be initiated. The next second, if the queue value is below 2000, outgoing traffic starts again.
The strange thing here is that it has not prevented us from reaching the error scenario, since we don’t have much logging of this occurring. It may mean that when traffic hits us hard, things go bad really quickly, so that the 1 second check interval is actually too long.
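For clarity, the valve essentially does something like the following (a stripped-down sketch, not our real implementation; the flag name is made up):

using System.Diagnostics;
using System.Threading;

static class OutgoingTrafficValve
{
    // Read by the forwarding code before it initiates outgoing calls (hypothetical flag).
    public static volatile bool Enabled = true;

    public static void Start()   // called from Application_Start
    {
        var queued = new PerformanceCounter("ASP.NET v4.0.30319", "Requests Queued");
        var thread = new Thread(() =>
        {
            while (true)
            {
                Enabled = queued.NextValue() < 2000;   // the threshold described above
                Thread.Sleep(1000);                    // 1 second sampling interval
            }
        }) { IsBackground = true };
        thread.Start();
    }
}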
3.C) Thread pool slow increase (and decrease) of threads
There is another aspect of this as well. When there is a need for more threads in the application pool, these threads get allocated very slowly - from what I read, 1-2 threads per second. This is because it is expensive to create threads, and since you don’t want too many threads anyway, in order to avoid expensive context switching in the synchronous case, I think this is natural. However, it should also mean that if a sudden large burst of traffic hits us, the number of threads is not going to be nearly enough to satisfy the need in the asynchronous scenario, and queuing of requests will start. This is a very likely problem candidate, I think. One candidate solution may then be to increase the minimum number of threads created in the ThreadPool (see the sketch below). But I guess this may also affect the performance of the synchronously running requests.
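If we go down that route, it would be something along these lines (a sketch only; the numbers are placeholders that would need load testing, since raising the floor too far has its own costs):

using System;
using System.Threading;

// Raise the thread pool floor at startup (e.g. from Application_Start) so a sudden
// burst does not have to wait for the slow 1-2 threads/second injection rate.
static void RaiseThreadPoolFloor()
{
    int minWorkers, minIocp;
    ThreadPool.GetMinThreads(out minWorkers, out minIocp);

    int perCore = 4 * Environment.ProcessorCount;   // illustrative value only
    ThreadPool.SetMinThreads(Math.Max(minWorkers, perCore),
                             Math.Max(minIocp, perCore));
}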
4) Memory problems
(Joey Reyes wrote about this here in a blog post)
Since objects get collected later for asynchronous requests (up to 120 ms later in our case), memory problems can arise, because objects can be promoted to generation 1 and the memory will not be collected as often as it should be. The increased pressure on the garbage collector may very well cause extended thread context switching and further weaken the capacity of the server.
However, we see neither increased GC nor CPU usage during the time of the problem, so we don’t think the suggested CPU throttling mechanism is a solution for us.
UPDATE 19/2, 2013: We use a cache-swap mechanism at regular intervals, at which an (almost) full in-memory cache is reloaded into memory and the old cache can get garbage collected. At these times, the GC has to work harder and steals resources from the normal request handling. Using the Windows performance counter for thread context switching, it shows that the number of context switches decreases significantly from the normal high value at times of high GC usage. I think that during such cache reloads, the server is extra vulnerable to queueing up requests, and it is necessary to reduce the footprint of the GC. One potential fix would be to just fill the cache without allocating memory all the time. A bit more work, but it should be doable.
UPDATE 24/4, 2013:
I am still in the middle of the cache-reload memory tweak to avoid having the GC run as much. But we normally have some 1000 queued requests temporarily when the GC runs. Since it runs on all threads, it is natural that it steals resources from the normal request handling. I'll update this status once the tweak has been deployed and we can see a difference.
END UPDATE 24/4
I have implemented a reverse proxy through an async HTTP handler for benchmarking purposes (as a part of my PhD thesis) and ran into the very same problems as you.
In order to scale, it is mandatory to have processModel autoConfig set to false and to fine-tune the thread pools. I have found that, contrary to what the documentation regarding the processModel defaults says, many of the thread pools are not properly configured when autoConfig is set to true. The maxconnection setting is also important, as it limits your scalability if it is set too low. See http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
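For reference, the kind of tuning I mean lives under processModel in machine.config; the values below are only illustrative (the right numbers depend on your hardware and on the KB article linked above):

<system.web>
  <processModel autoConfig="false"
                maxWorkerThreads="100"
                maxIoThreads="100"
                minWorkerThreads="50"
                minIoThreads="50" />
</system.web>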
Regarding your app running out of ports because of the TIME_WAIT delay on the socket, I have also faced the same problem because I was injecting traffic from a limited set of machines with more than 64k requests in 240 seconds. I lowered the TIME_WAIT to 30 seconds without any problems.
I also mistakenly reused a proxy object to a Web Services endpoint in several threads. Although the proxy doesn't have any state, I found that the GC had a lot of problems collecting the memory associated with its internal buffers (String [] instances) and that caused my app to run out of memory.
Some interesting performance counters you should monitor are the ones related to queued requests, requests executing and request time under the ASP.NET applications category. If you see queued requests, or if the execution time is low but the clients see long request times, then you have some sort of contention in your server. Also monitor counters under the LocksAndThreads category, looking for contention.
Since asynchronous requests hold up the TCP sockets for longer, maybe you need to look at the maxconnection property within connectionManagement in your web.config?
Please refer to this link: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
We faced a similar problem and tuned this parameter to fix our issue. Maybe this will help you.
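For reference, it is configured like this (the value is just an example and should be sized to your outgoing request volume):

<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="1000" />
    </connectionManagement>
  </system.net>
</configuration>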
Edit: Also, based on past experience, lots of TIME_WAITs indicate a connection leak in the code. Possible causes: 1) not disposing of the connections used; 2) an incorrect implementation of connection pooling.
