Asp.Net Core HealthChecks HTTP Timeout - asp.net-core-webapi

I have a health check (https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks?view=aspnetcore-5.0) that runs a long series of SQL queries, a lot of them. After 100 seconds it times out with this error:
HResult":-2146233029,"Message":"The request was canceled due
to the configured HttpClient.Timeout of 100 seconds
elapsing.","Source":"System.Net.Http","InnerException":
{"Type":"System.TimeoutException","HResult":-
2146233083,"Message":"The operation was
canceled.","Source":null,"InnerException":
{"Type":"System.Threading.Tasks.TaskCanceledException","HResult":-
2146233029,"Message":"The operation was
canceled.","Source":"System.Net.Http","InnerException":
{},"CancellationToken":"CancellationRequested:
true","Task":"null"}},"CancellationToken":"CancellationRequested:
true","Task":"null"}
So I tried to add a timeout:
services.AddHealthChecks()
    .AddCheck<Sys_Version_VersionHealthCheck>(
        "Version",
        HealthStatus.Unhealthy,
        tags: new[] { sys_version_settings.LogTag },
        timeout: new TimeSpan(0, 0, 5, 0, 0));
But it seems to have no effect.
Is there a setting around the HealthChecks that can be set to increase this?
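Note that the error message names HttpClient.Timeout, whose default is 100 seconds, so the limit is likely on whatever client calls the health endpoint rather than in the health-check registration itself. If that caller is under your control, a hedged sketch of raising it (client name hypothetical):

```csharp
// Hypothetical registration for the client that polls the health endpoint;
// HttpClient.Timeout defaults to 100 seconds.
services.AddHttpClient("health-probe", client =>
{
    client.Timeout = TimeSpan.FromMinutes(5); // must outlive the slowest check
});
```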

Related

Chromedp - how to get image file size and real dimensions

I am trying to retrieve some information like page load time, first paint but also the images and scripts that are being loaded and their sizes.
I am able to detect everything that is being loaded in terms of images and scripts but when I look at the sizes, they do not match the size which I see in my (Firefox) inspector.
Can anyone please explain to me what I am doing wrong?
I would also like to get to know how long it took to load the specific file.
I came up with this code. This shows me the correct name and mimetype, but as said, the file size is not correct.
chromedp.ListenTarget(ctx, func(ev interface{}) {
    switch ev := ev.(type) {
    case *network.EventResponseReceived:
        eventResponseReceived = append(eventResponseReceived, network.EventResponseReceived{
            RequestID:    ev.RequestID,
            LoaderID:     ev.LoaderID,
            Timestamp:    ev.Timestamp,
            Type:         ev.Type,
            Response:     ev.Response,
            HasExtraInfo: ev.HasExtraInfo,
            FrameID:      ev.FrameID,
        })
    case *network.EventLoadingFinished:
        eventLoadingFinished = append(eventLoadingFinished, network.EventLoadingFinished{
            RequestID:                ev.RequestID,
            Timestamp:                ev.Timestamp,
            EncodedDataLength:        ev.EncodedDataLength,
            ShouldReportCorbBlocking: ev.ShouldReportCorbBlocking,
        })
    }
})
for i := range eventResponseReceived {
    for i2 := range eventLoadingFinished {
        if eventResponseReceived[i].RequestID == eventLoadingFinished[i2].RequestID {
            fmt.Println(eventResponseReceived[i].Response.URL)
            fmt.Println(eventResponseReceived[i].Response.ResponseTime.Time())
            fmt.Println(eventResponseReceived[i].Response.EncodedDataLength)
            fmt.Println(eventResponseReceived[i].Response.MimeType)
        }
    }
}
I found out that in some cases I can get the Content-Length header, but for a lot of files it is unfortunately nil.
eventResponseReceived[i].Response.Headers["content-length"]
So for the files where no content-length was given, I need a solution.
IIUC, you want to measure the response size and the load time. In this case, I think you should check both the Network.responseReceived event and the Network.loadingFinished event. (The corresponding events in chromedp are network.EventResponseReceived and network.EventLoadingFinished).
I have attached an example request to the end of this answer. I will try to answer the question based on this example request.
Size
It's correct to get the size from the Network.loadingFinished event. Please note that this is not the file size. It's the total number of bytes received for this request. And it could be 0 if the response is loaded from a cache. The following fields from the response field of the Network.responseReceived event would tell you why encodedDataLength is 0 (an example event has been attached, read it first):
fromDiskCache: specifies that the request was served from the disk cache.
fromServiceWorker: specifies that the request was served from the ServiceWorker.
fromPrefetchCache: specifies that the request was served from the prefetch cache.
Don't try to read encodedDataLength from the response field, because it's just the total number of bytes received for this request so far. There could be more data to receive (reported by the Network.dataReceived events).
And don't try to read the size from the Content-Length header. It's not always provided, and it's not correct to use it to measure network payload.
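Since the size belongs to the Network.loadingFinished event, one way to pair the two event streams is to index the finished events by RequestID first instead of nesting loops. A minimal self-contained sketch with stand-in structs (in practice you would use the real chromedp event types):

```go
package main

import "fmt"

// Minimal stand-ins for the cdproto/network event fields used here
// (the real chromedp types are assumed, not imported).
type loadingFinished struct {
	RequestID         string
	EncodedDataLength float64 // total bytes received for the request
}

type responseReceived struct {
	RequestID string
	URL       string
	MimeType  string
}

// sizesByRequestID indexes Network.loadingFinished events once,
// so each response can be looked up by RequestID.
func sizesByRequestID(finished []loadingFinished) map[string]float64 {
	sizes := make(map[string]float64, len(finished))
	for _, f := range finished {
		sizes[f.RequestID] = f.EncodedDataLength
	}
	return sizes
}

func main() {
	// Values from the example events at the end of this answer.
	responses := []responseReceived{{
		RequestID: "580832.38",
		URL:       "https://i.ytimg.com/vi/noIxfDrKx_Q/maxresdefault.jpg",
		MimeType:  "image/jpeg",
	}}
	finished := []loadingFinished{{RequestID: "580832.38", EncodedDataLength: 50766}}

	sizes := sizesByRequestID(finished)
	for _, r := range responses {
		if size, ok := sizes[r.RequestID]; ok {
			fmt.Printf("%s (%s): %.0f bytes\n", r.URL, r.MimeType, size)
		}
	}
}
```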
Time
I will try to list the metrics in the timing tab based on my own testing. Unless specified otherwise, the data points come from the response.timing field of the Network.responseReceived event.
Stalled: dnsStart. If a connection is reused, it could be sendStart.
DNS Lookup: dnsEnd - dnsStart.
Initial connection: connectEnd - connectStart.
SSL: sslEnd - sslStart.
Request sent: sendEnd - sendStart.
Waiting for server response: receiveHeadersEnd - sendEnd.
Content Download: timestamp (from the Network.loadingFinished event) - requestTime - receiveHeadersEnd. (Note that requestTime and the loadingFinished timestamp are in seconds, while receiveHeadersEnd is in milliseconds relative to requestTime, so convert before subtracting.)
I think you want to check the Waiting for server response metric and the Content Download metric most of the time.
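Those two metrics can be sketched as below. This assumes the usual CDP convention that requestTime and the loadingFinished timestamp are in seconds while the timing offsets are milliseconds relative to requestTime; the constants are taken from the example events at the end of this answer:

```go
package main

import "fmt"

// Waiting-for-server-response: time between finishing the request
// and receiving the response headers (both offsets in ms).
func waitingMs(sendEnd, receiveHeadersEnd float64) float64 {
	return receiveHeadersEnd - sendEnd
}

// Content Download: convert the seconds-based timestamps to ms
// before subtracting the ms-based receiveHeadersEnd offset.
func contentDownloadMs(requestTime, receiveHeadersEnd, finishedTimestamp float64) float64 {
	return (finishedTimestamp-requestTime)*1000 - receiveHeadersEnd
}

func main() {
	// Values from the example Network.responseReceived / loadingFinished events.
	const (
		requestTime       = 40036.989753 // seconds
		sendEnd           = 333.142      // ms after requestTime
		receiveHeadersEnd = 459.986      // ms after requestTime
		finishedTimestamp = 40037.577265 // seconds
	)
	fmt.Printf("Waiting for server response: %.3f ms\n", waitingMs(sendEnd, receiveHeadersEnd))
	fmt.Printf("Content Download: %.3f ms\n", contentDownloadMs(requestTime, receiveHeadersEnd, finishedTimestamp))
}
```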
Recommendation
Protocol Monitor is a must-have tool if you want to work with Chrome DevTools Protocol.
Example Request
Here is the Network.responseReceived event (some fields are removed):
{
  "requestId": "580832.38",
  "timestamp": 40037.455632,
  "type": "Image",
  "response": {
    "url": "https://i.ytimg.com/vi/noIxfDrKx_Q/maxresdefault.jpg",
    "status": 200,
    "mimeType": "image/jpeg",
    "connectionReused": false,
    "fromDiskCache": false,
    "fromServiceWorker": false,
    "fromPrefetchCache": false,
    "encodedDataLength": 55,
    "timing": {
      "requestTime": 40036.989753,
      "proxyStart": -1,
      "proxyEnd": -1,
      "dnsStart": 0.662,
      "dnsEnd": 81.792,
      "connectStart": 81.792,
      "connectEnd": 332.482,
      "sslStart": 158.76,
      "sslEnd": 332.473,
      "workerStart": -1,
      "workerReady": -1,
      "workerFetchStart": -1,
      "workerRespondWithSettled": -1,
      "sendStart": 332.906,
      "sendEnd": 333.142,
      "pushStart": 0,
      "pushEnd": 0,
      "receiveHeadersEnd": 459.986
    },
    "responseTime": 1668830468734.381
  },
  "frameId": "83262FAD65C24B78F2B7E6F884B2B146"
}
And here is the Network.loadingFinished event:
{
  "requestId": "580832.38",
  "timestamp": 40037.577265,
  "encodedDataLength": 50766,
  "shouldReportCorbBlocking": false
}

Blazor Connection Disconnected

I have a web app with Client running on Blazor Server. I have set a custom Blazor reconnect modal (<div id="components-reconnect-modal"...) as the documentation says here - Microsoft Docs
Also I have these settings for SignalR and Blazor circuits:
services.AddServerSideBlazor(options =>
{
    options.DisconnectedCircuitMaxRetained = 100;
    options.DisconnectedCircuitRetentionPeriod = TimeSpan.FromMinutes(5);
    options.JSInteropDefaultCallTimeout = TimeSpan.FromMinutes(1);
    options.MaxBufferedUnacknowledgedRenderBatches = 10;
})
.AddHubOptions(options =>
{
    options.ClientTimeoutInterval = TimeSpan.FromSeconds(30);
    options.EnableDetailedErrors = false;
    options.HandshakeTimeout = TimeSpan.FromSeconds(15);
    options.KeepAliveInterval = TimeSpan.FromSeconds(15);
    options.MaximumReceiveMessageSize = 32 * 1024;
    options.StreamBufferCapacity = 10;
});
But I have an annoying problem: whenever the app is open in a browser tab and sits idle with nobody using it, it disconnects. It happens very inconsistently, and I can't locate the configuration for these time periods, but I need to increase them. Example:
Initially loaded the app at 11:34:24 AM
Leave it like that for a while
In the Console: "Information: Connection disconnected." at 11:55:48 AM, and my reconnect-modal appears.
How can I extend the lifetime of the connection so that it is always longer than my session timeout? I checked the Private Memory Limit of my app pool, but it is unlimited. It happens really inconsistently with the same steps to reproduce: Test 1 - 16 min 20 sec; Test 2 - 21 min 58 sec; Test 3 - 34 min 56 sec...and then after iisreset...Test 4 - 6 min 28 sec.
Please help.
Apparently this is by design. Once the client is idle for some time, it essentially stops sending pings to the server. The server then only knows that the client has disconnected and drops the details to allow for resource reuse. You can edit the times and count of "inactive" connections as you have done in the CircuitOptions, but at some point, if the client is idle, it will disconnect.
have a look at:
https://learn.microsoft.com/en-us/aspnet/core/blazor/state-management?view=aspnetcore-3.1&pivots=server
I think that you can go around this error by using this code in _Host.cshtml:
<script>
    Blazor.start().then(() => {
        Blazor.defaultReconnectionHandler._reconnectionDisplay = {
            show: () => {},
            update: (d) => {},
            rejected: (d) => document.location.reload()
        };
    });
</script>
Please read also about Modify the reconnection handler (Blazor Server)
See also this answer
Are you hosting on IIS?
Is the default IIS App Pool recycling set to 20 mins of Idle?
You could try and set this to 0.
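If the app pool's idle timeout is the culprit, it can be zeroed out with appcmd (pool name hypothetical):

```shell
%windir%\system32\inetsrv\appcmd.exe set apppool "MyBlazorAppPool" /processModel.idleTimeout:00:00:00
```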

Asp net core SignalR websocket connection time more than 1 second

I created a .NET Core SignalR client using https://learn.microsoft.com/en-us/aspnet/signalr/overview/getting-started/tutorial-getting-started-with-signalr
Connecting with StartAsync takes more than 1 second.
That is too long.
How can we reduce the start time? Any parameters to tweak?
Thanks for the response and the rectified link.
The solution uses ASP.NET Core, with the transport configured to use WebSockets only.
Server side (Startup/ Configure)
app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<LiveHub>("/signalr/livehub", options =>
    {
        options.Transports = HttpTransportType.WebSockets;
    });
});
Console Client application was built as per:
https://github.com/aspnet/AspNetCore.Docs/tree/master/aspnetcore/signalr/dotnet-client/sample/SignalRChatClient
Under Client
connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:xxxx/signalr/livehub", options =>
    {
        options.SkipNegotiation = true;
        options.Transports = HttpTransportType.WebSockets;
    })
    .Build();
Time for code execution was calculated on client, on
await connection.StartAsync();
With multiple test cases, average connection time hovers around 900msec to 1200 msec.
This is a very long time.

ASP.NET Core 2.2 kestrel server's performance issue

I'm facing a problem with Kestrel server's performance. I have the following scenario:
TestClient(JMeter) -> DemoAPI-1(Kestrel) -> DemoAPI-2(IIS)
I'm trying to create a sample application that could get the file content as and when requested.
TestClient(100 threads) sends requests to DemoAPI-1, which in turn requests DemoAPI-2. DemoAPI-2 reads a fixed XML file (1 MB max) and returns its content as a response (in production, DemoAPI-2 is not going to be exposed to the outside world).
When I tested direct access from TestClient -> DemoAPI-2 I got the expected (good) result, which is the following:
Average : 368ms
Minimum : 40ms
Maximum : 1056ms
Throughput : 40.1/sec
But when I tried to access it through DemoAPI-1 I got following result :
Average : 48232ms
Minimum : 21095ms
Maximum : 49377ms
Throughput : 2.0/sec
As you can see, there is a huge difference; I'm not getting even 10% of the throughput of DemoAPI-2. I was told that Kestrel is more efficient and faster than traditional IIS. Also, because there is no problem with direct access, I think we can eliminate the possibility of a problem in DemoAPI-2.
※Code of DemoAPI-1 :
string base64Encoded = null;
var request = new HttpRequestMessage(HttpMethod.Get, url);
var response = await this.httpClient.SendAsync(request, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false);
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
    var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    base64Encoded = Convert.ToBase64String(content);
}
return base64Encoded;
※Code of DemoAPI-2 :
[HttpGet("Demo2")]
public async Task<IActionResult> Demo2Async(int wait)
{
    try
    {
        if (wait > 0)
        {
            await Task.Delay(wait);
        }
        var path = Path.Combine(Directory.GetCurrentDirectory(), "test.xml");
        var file = System.IO.File.ReadAllText(path);
        return Content(file);
    }
    catch (System.Exception ex)
    {
        return StatusCode(500, ex.Message);
    }
}
Some additional information :
Both APIs are async.
Both APIs are hosted on different EC2 instances(C5.xlarge Windows Server 2016).
DemoAPI-1(kestrel) is a self-contained API(without reverse proxy)
TestClient(jMeter) is set to 100 thread for this testing.
No other configuration is done for kestrel server as of now.
There are no action filters, middleware, or logging that could affect the performance as of now.
Communication is done using SSL on 5001 port.
Wait parameter for DemoAPI2 is set to 0 as of now.
The CPU usage of DEMOAPI-1 is not over 40%.
The problem was due to HttpClient's port exhaustion issue.
I was able to solve it by using IHttpClientFactory.
The following article might help someone who faces a similar problem:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
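The fix described above can be sketched as follows; the typed-client name and base address are hypothetical:

```csharp
// In Startup.ConfigureServices: let IHttpClientFactory pool and reuse
// message handlers instead of creating/disposing HttpClient per request,
// which leaves sockets in TIME_WAIT and exhausts ports under load.
services.AddHttpClient<Demo2Client>(client =>
{
    client.BaseAddress = new Uri("https://demoapi2.example.com");
});

// The typed client receives a pooled HttpClient via DI.
public class Demo2Client
{
    private readonly HttpClient httpClient;

    public Demo2Client(HttpClient httpClient) => this.httpClient = httpClient;
}
```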
DEMOAPI-1 performs a non-asynchronous read of the streams:
var bytes = stream.Read(read, 0, DataChunkSize);
while (bytes > 0)
{
    buffer += System.Text.Encoding.UTF8.GetString(read, 0, bytes);
    // Replace with ReadAsync
    bytes = stream.Read(read, 0, DataChunkSize);
}
That can be an issue with throughput on a lot of requests.
Also, I'm not sure why you are not testing the same code on both IIS and Kestrel; I would assume you only need to make environmental changes, not code changes.

openstack swift: The server has waited too long for the request to be sent by the client

We get a few of these now and then:
Caused by: javax.ejb.EJBException: org.jclouds.http.HttpResponseException:
command: PUT {{PUT_URL}}
HTTP/1.1 failed with response: HTTP/1.1 408 Request Timeout;
content: [<html><h1>Request Timeout</h1><p>The server has waited too long for the request to be sent by the client.</p></html>]
Retrying later usually works. What causes this exception? Is there a way to increase the timeout on Swift?
jclouds 1.7.2 includes a fix for this issue:
https://issues.apache.org/jira/browse/JCLOUDS-342
Your question does not have enough detail.
If you are a developer, you can use something like:
import static org.jclouds.Constants.*;

Properties overrides = new Properties();
overrides.setProperty(PROPERTY_MAX_CONNECTIONS_PER_CONTEXT, 20 + "");
overrides.setProperty(PROPERTY_MAX_CONNECTIONS_PER_HOST, 0 + "");
overrides.setProperty(PROPERTY_CONNECTION_TIMEOUT, 5000 + "");
overrides.setProperty(PROPERTY_SO_TIMEOUT, 5000 + "");
overrides.setProperty(PROPERTY_IO_WORKER_THREADS, 20 + "");
// unlimited user threads
overrides.setProperty(PROPERTY_USER_THREADS, 0 + "");

Set<Module> wiring = ImmutableSet.of(new EnterpriseConfigurationModule(), new Log4JLoggingModule());

// same properties and wiring can be used for many services, although the limits are per context
blobStoreContext = ContextBuilder.newBuilder("s3")
    .credentials(account, key)
    .modules(wiring)
    .overrides(overrides)
    .buildView(BlobStoreContext.class);

computeContext = ContextBuilder.newBuilder("ec2")
    .credentials(account, key)
    .modules(wiring)
    .overrides(overrides)
    .buildView(ComputeServiceContext.class);
Following is the quote from JClouds Configuration docs:
Timeout:
Aggregate commands will take as long as necessary to complete, as controlled by FutureIterables.awaitCompletion.
If you need to increase or decrease this, you will need to adjust the property jclouds.request-timeout or Constants.PROPERTY_REQUEST_TIMEOUT.
This is described in the Advanced Configuration section.
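Per that quote, the overall request timeout can be raised through the same overrides mechanism shown above (the 60-second value is only an example):

```java
import static org.jclouds.Constants.PROPERTY_REQUEST_TIMEOUT;

import java.util.Properties;

Properties overrides = new Properties();
// jclouds.request-timeout, in milliseconds
overrides.setProperty(PROPERTY_REQUEST_TIMEOUT, 60 * 1000 + "");
// pass these overrides to ContextBuilder.overrides(...) as above
```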
If you are dealing with your own cluster then you can go with some possible configuration options present in swift proxy-server-configuration.
