If only one SockJS (XHR polling) connection is open, the app works fine. As soon as I additionally open it in a new window, the connections are periodically closed. The client is the SockJS client; the backend is Spring Boot with MVC and SockJS enabled.
This is what I see on the server side:
2022-03-14 10:40:20.992 DEBUG 752 --- [nio-8311-exec-7] a.w.b.u.d.websocket.WebSocketHandler : connection opened, id: gqgmlrff
2022-03-14 10:40:21.015 DEBUG 752 --- [nio-8311-exec-9] a.w.b.u.d.websocket.WebSocketHandler : Server received message: {"action":"subscribe","payload":{"id":"c910f5d1-9e16-4e30-9559-e0e27973177b","entityType":"PROJEKT"}}
After 15–20 seconds:
2022-03-14 10:40:40.075 DEBUG 752 --- [ SockJS-10] a.w.b.u.d.websocket.WebSocketHandler : connection closed, sessionId: gqgmlrff, status: CloseStatus[code=3000, reason=Go away!]
This repeats indefinitely for the sessions of both windows.
It seems that the session closing is initiated by the backend, since the .onclose() handler is executed on the client side.
The reason was that a different auth token had been generated for each app instance, and the token class did not implement equality accordingly. After overriding equals() and hashCode() as follows, the requests were processed correctly:
override fun equals(obj: Any?): Boolean = obj is KeycloakToken &&
    principal == obj.principal &&
    authorities == obj.authorities &&
    isAuthenticated == obj.isAuthenticated

override fun hashCode(): Int {
    var code = 31
    for (authority in authorities) {
        code = code xor authority.hashCode()
    }
    code = code xor principal.hashCode()
    if (this.isAuthenticated) {
        code = code xor -37
    }
    return code
}
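For the hash, a more conventional equivalent is to delegate to java.util.Objects (purely illustrative; it satisfies the same contract as the XOR version above):

import java.util.Objects

// Tokens with the same principal, authorities and authentication flag
// produce the same hash code, matching the equals() definition.
override fun hashCode(): Int = Objects.hash(principal, authorities, isAuthenticated)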
I have a web app with the client running on Blazor Server. I have set up a custom Blazor reconnect modal (<div id="components-reconnect-modal"...) as described in the documentation here - Microsoft Docs
I also have these settings for SignalR and Blazor circuits:
services.AddServerSideBlazor(options =>
{
    options.DisconnectedCircuitMaxRetained = 100;
    options.DisconnectedCircuitRetentionPeriod = TimeSpan.FromMinutes(5);
    options.JSInteropDefaultCallTimeout = TimeSpan.FromMinutes(1);
    options.MaxBufferedUnacknowledgedRenderBatches = 10;
})
.AddHubOptions(options =>
{
    options.ClientTimeoutInterval = TimeSpan.FromSeconds(30);
    options.EnableDetailedErrors = false;
    options.HandshakeTimeout = TimeSpan.FromSeconds(15);
    options.KeepAliveInterval = TimeSpan.FromSeconds(15);
    options.MaximumReceiveMessageSize = 32 * 1024;
    options.StreamBufferCapacity = 10;
});
But I have an annoying problem: whenever the app is open in a browser tab and sits idle with nobody using it, it disconnects. It happens very inconsistently, and I can't locate the configuration for these timeout periods, which I need to enlarge. Example:
Initially loaded the app at 11:34:24 AM
Left it like that for a while
In the Console: "Information: Connection disconnected." at 11:55:48 AM, and my reconnect modal appears
How can I enlarge the lifetime of the connection so that it is always longer than my session timeout? I checked the Private Memory Limit of my app pool, but it is unlimited. It happens very inconsistently with the same steps to reproduce: Test 1 - 16 mins 20 sec; Test 2 - 21 mins 58 sec; Test 3 - 34 mins 56 sec... and then, after an iisreset... Test 4 - 6 mins 28 sec.
Please help.
Apparently this is by design. Once the client is idle for some time, it essentially stops sending pings to the server. The server then only knows that the client has disconnected and drops the circuit details to allow for resource reuse. You can edit the retention times and the count of "inactive" connections as you have done in the CircuitOptions, but at some point an idle client will disconnect.
Have a look at:
https://learn.microsoft.com/en-us/aspnet/core/blazor/state-management?view=aspnetcore-3.1&pivots=server
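If the goal is simply to make the client keep trying for longer before it gives up, the client-side reconnection behaviour can also be tuned by starting Blazor manually in _Host.cshtml (a sketch; the retry values are illustrative):

<script src="_framework/blazor.server.js" autostart="false"></script>
<script>
    // 60 retries at 5-second intervals gives roughly 5 minutes of
    // reconnection attempts before the rejected UI is shown.
    Blazor.start({
        reconnectionOptions: {
            maxRetries: 60,
            retryIntervalMilliseconds: 5000
        }
    });
</script>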
I think that you can work around this error by using this code in _Host.cshtml:
<script>
    Blazor.start().then(() => {
        Blazor.defaultReconnectionHandler._reconnectionDisplay = {
            show: () => {},
            update: (d) => {},
            rejected: (d) => document.location.reload()
        };
    });
</script>
Please also read about Modify the reconnection handler (Blazor Server) in the docs.
See also this answer
Are you hosting on IIS?
Is the app pool's idle timeout set to the IIS default of 20 minutes?
You could try setting this to 0.
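For reference, the idle timeout can be changed in IIS Manager under the app pool's Advanced Settings (Process Model > Idle Time-out), or via appcmd; a sketch, assuming the pool name (run from an elevated prompt):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00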
I use AWS Neptune.
I am trying to implement my queries as transactional (with a session client, as in https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html). But when I implement this, closing the client throws an exception. There is a similar issue to mine here: https://groups.google.com/g/janusgraph-users/c/N1TPbUU7Szw
My code looks like this:
@Bean
public Cluster gremlinCluster()
{
    return Cluster.build()
        .addContactPoint(GREMLIN_ENDPOINT)
        .port(GREMLIN_PORT)
        .enableSsl(GREMLIN_SSL_ENABLED)
        .keyCertChainFile("classpath:SFSRootCAG2.pem")
        .create();
}
private void runInTransaction()
{
    String sessionId = UUID.randomUUID().toString();
    Client.SessionedClient client = cluster.connect(sessionId);
    try
    {
        client.submit("query...");
    }
    finally
    {
        if (client != null)
        {
            client.close();
        }
    }
}
And the exception is:
INFO (ConnectionPool.java:225) - Signalled closing of connection pool on Host{address=...} with core size of 1
WARN (Connection.java:322) - Timeout while trying to close connection on ... - force closing - server will close session on shutdown or expiration.
java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
Do you have any suggestions?
This might be a connectivity problem with the server that you are unable to observe when sending the query, because you are not waiting for the future to complete.
When you call client.submit("query...");, you receive a future. You need to wait for that future to complete to observe any exceptions (or success).
I would suggest the following:
Try hitting the server with a health status call using curl to verify connectivity with the server.
Replace the client.submit("query..."); with client.submit("query...").all().join(); so that any error communicating with the server surfaces there.
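Applied to the method from the question, that change would look roughly like this (the query string stays a placeholder, as in the original):

private void runInTransaction()
{
    String sessionId = UUID.randomUUID().toString();
    Client.SessionedClient client = cluster.connect(sessionId);
    try
    {
        // Block until the server answers, so any connectivity or query
        // error surfaces here rather than as a timeout during close().
        client.submit("query...").all().join();
    }
    finally
    {
        client.close();
    }
}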
I'm facing a problem with the Kestrel server's performance. I have the following scenario:
TestClient(JMeter) -> DemoAPI-1(Kestrel) -> DemoAPI-2(IIS)
I'm trying to create a sample application that returns a file's content on request.
TestClient (100 threads) sends requests to DemoAPI-1, which in turn requests DemoAPI-2. DemoAPI-2 reads a fixed XML file (1 MB max) and returns its content as the response (in production, DemoAPI-2 will not be exposed to the outside world).
When I tested direct access from TestClient -> DemoAPI-2, I got the expected (good) result:
Average : 368ms
Minimum : 40ms
Maximum : 1056ms
Throughput : 40.1/sec
But when I tried to access it through DemoAPI-1, I got the following result:
Average : 48232ms
Minimum : 21095ms
Maximum : 49377ms
Throughput : 2.0/sec
As you can see, there is a huge difference. I'm not getting even 10% of DemoAPI-2's throughput, although I was told that Kestrel is more efficient and faster than traditional IIS. Also, because there is no problem with direct access, I think we can eliminate the possibility of a problem in DemoAPI-2.
※Code of DemoAPI-1:
string base64Encoded = null;
var request = new HttpRequestMessage(HttpMethod.Get, url);
var response = await this.httpClient.SendAsync(request, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false);
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
    var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    base64Encoded = Convert.ToBase64String(content);
}
return base64Encoded;
※Code of DemoAPI-2:
[HttpGet("Demo2")]
public async Task<IActionResult> Demo2Async(int wait)
{
    try
    {
        if (wait > 0)
        {
            await Task.Delay(wait);
        }
        var path = Path.Combine(Directory.GetCurrentDirectory(), "test.xml");
        var file = System.IO.File.ReadAllText(path);
        return Content(file);
    }
    catch (System.Exception ex)
    {
        return StatusCode(500, ex.Message);
    }
}
Some additional information:
Both APIs are async.
Both APIs are hosted on different EC2 instances (C5.xlarge, Windows Server 2016).
DemoAPI-1 (Kestrel) is a self-contained API (without a reverse proxy).
TestClient (JMeter) is set to 100 threads for this testing.
No other configuration has been done for the Kestrel server as of now.
There are no action filters, middleware, or logging that could affect the performance as of now.
Communication is done over SSL on port 5001.
The wait parameter for DemoAPI-2 is set to 0 as of now.
The CPU usage of DemoAPI-1 is not over 40%.
The problem was due to HttpClient's port exhaustion issue.
I was able to solve it by using IHttpClientFactory.
The following article might help someone who faces a similar problem:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
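For anyone looking for the shape of that fix, here is a minimal sketch of the IHttpClientFactory approach (class and method names are illustrative, not from the question):

// In Startup.ConfigureServices: register the factory once (Microsoft.Extensions.Http).
services.AddHttpClient();

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class NotificationForwarder
{
    private readonly IHttpClientFactory _httpClientFactory;

    public NotificationForwarder(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<string> GetAsBase64Async(string url)
    {
        // Clients are cheap to create here; the factory pools and recycles the
        // underlying handlers, avoiding socket/port exhaustion under load.
        var client = _httpClientFactory.CreateClient();
        using (var response = await client.GetAsync(url).ConfigureAwait(false))
        {
            response.EnsureSuccessStatusCode();
            var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
            return Convert.ToBase64String(content);
        }
    }
}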
DEMOAPI-1 performs a non-asynchronous read of the streams:
var bytes = stream.Read(read, 0, DataChunkSize);
while (bytes > 0)
{
    buffer += System.Text.Encoding.UTF8.GetString(read, 0, bytes);
    // Replace with ReadAsync
    bytes = stream.Read(read, 0, DataChunkSize);
}
That can be an issue with throughput on a lot of requests.
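A sketch of the asynchronous equivalent (variable names taken from the snippet above; a StringBuilder also avoids the repeated string concatenation):

var read = new byte[DataChunkSize];
var builder = new System.Text.StringBuilder();
// ReadAsync releases the thread back to the pool while waiting on I/O.
int bytes = await stream.ReadAsync(read, 0, DataChunkSize);
while (bytes > 0)
{
    // Note: decoding chunk by chunk can split multi-byte UTF-8 characters;
    // wrapping the stream in a StreamReader would handle that correctly.
    builder.Append(System.Text.Encoding.UTF8.GetString(read, 0, bytes));
    bytes = await stream.ReadAsync(read, 0, DataChunkSize);
}
var buffer = builder.ToString();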
Also, I'm not sure why you aren't testing the same code on both IIS and Kestrel; I would assume you only need to make environmental changes, not code changes.
I'm trying to figure out why my web service is so slow and find ways to get it to respond faster. The current average response time without custom processing involved (i.e., an ApiController action returning a very simple object) is about 75 ms.
The setup
Machine:
32 GB RAM, SSD, 4 × 2.7 GHz CPUs, 8 logical processors, x64 Windows 10
Software:
1 ASP.NET MVC website running .NET 4.0 on IIS Express (System.Web.Mvc v5.2.7.0)
1 ASP.NET Web API website running .NET 4.0 on IIS Express (System.Net.Http v4.2.0.0)
1 RabbitMQ message bus
ASP.NET Web API Code (API Controller Action)
[Route("Send")]
[HttpPost]
[AllowAnonymous)
public PrimitiveTypeWrapper<long> Send(WebsiteNotificationMessageDTO notification)
{
_messageBus.Publish<IWebsiteNotificationCreated>(new { Notification = notification });
return new PrimitiveTypeWrapper<long>(1);
}
The body of this method takes 2 ms. Stackify tells me there's a lot of overhead on the AuthenticationFilterResult.ExecuteAsync method, but since it's an ASP.NET internal I don't think it can be optimized much.
ASP.NET MVC Code (MVC Controller Action)
The RestClient implementation is shown below. The HttpClientFactory returns a new HttpClient instance with the necessary headers and base path.
public async Task<long> Send(WebsiteNotificationMessageDTO notification)
{
    var result = await _httpClientFactory.Default.PostAndReturnAsync<WebsiteNotificationMessageDTO, PrimitiveTypeWrapper<long>>("/api/WebsiteNotification/Send", notification);
    if (result.Succeeded)
        return result.Data.Value;
    return 0;
}
Executing 100 requests as fast as possible against the backend REST service:
[HttpPost]
public async Task SendHundredNotificationsToMqtt()
{
    var sw = new Stopwatch();
    sw.Start();
    for (int i = 0; i < 100; i++)
    {
        await _notificationsRestClient.Send(new WebsiteNotificationMessageDTO()
        {
            Severity = WebsiteNotificationSeverity.Informational,
            Message = "Test notification " + i,
            Title = "Test notification " + i,
            UserId = 1
        });
    }
    sw.Stop();
    Debug.WriteLine("100 messages sent, took {0} ms", sw.ElapsedMilliseconds);
}
This takes 7.5 seconds on average, which is consistent with ~75 ms per request, since the loop awaits each call sequentially.
Things I've tried
Checked the number of available threads on both the REST service and the MVC website:
int workers;
int completions;
System.Threading.ThreadPool.GetMaxThreads(out workers, out completions);
which returned for both:
Workers: 8191
Completions: 1000
Removed all RabbitMQ message bus connectivity to ensure it isn't the culprit. I've also removed the message bus publish call (_messageBus.Publish<IWebsiteNotificationCreated>(new { Notification = notification });) from the REST method, so all it does is return 1 inside a wrapping object.
The backend REST service uses Identity Framework with bearer token authentication; to eliminate most of it, I've also tried marking the controller action on the REST service as AllowAnonymous.
Ran the project in Release mode: No change
Ran the sample 100 requests twice to exclude service initialization cost: No change
After all these attempts the problem remains: it still takes about 75 ms per request. Is this as low as it goes?
Here's a Stackify log for the backend with the above changes applied.
The web service remains slow. Is this as fast as it can get without an expensive hardware upgrade, or is there something else I can look into to figure out what's making my web service this slow?
I wrote some test code in a new web application, as below:
public ActionResult Index()
{
    Logger.Write("start Index,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    MyMethodAsync(System.Web.HttpContext.Current.Request); // not awaited; produces a compiler warning
    Logger.Write("end Index,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    return View();
}

private async Task MyMethodAsync(HttpRequest request)
{
    Logger.Write("start MyMethodAsync,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    await SomeMethodAsync(request);
    Logger.Write("end MyMethodAsync,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
}
And here is the log:
2017-11-15 19:55:31.904 start Index,threadId:35
2017-11-15 19:55:31.919 start MyMethodAsync,threadId:35
2017-11-15 19:55:31.919 end Index,threadId:35
2017-11-15 19:55:53.324 end MyMethodAsync,threadId:46
The client browser receives the response at about 2017-11-15 19:55:32, which accords with my understanding. In my actual project's production environment it writes the same log as above; however, the client browser received the response about 22 seconds later, at about 2017-11-15 19:55:54. It seems that even though the main thread completed its work, the response was not returned until the new thread completed its work.
I have been debugging this problem for several days. Could you help me, please?
async-await does not change the HTTP protocol. The request goes to the server, the server produces a response and sends it to the client.
It only changes how requests are processed inside ASP.NET.
And it doesn't make handling an individual request faster. Quite the contrary.
But it does release thread pool threads while operations are awaited, which makes the server more responsive under heavy load.
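A minimal illustration of that last point (a hypothetical action, not from the question): the request thread is released during the await, which is also why the question's log shows different thread IDs (35 and 46) before and after the awaited call.

public async Task<ActionResult> Demo()
{
    // Runs on a thread pool request thread.
    var before = System.Threading.Thread.CurrentThread.ManagedThreadId;

    // The thread returns to the pool here; the continuation may resume
    // on a different pool thread once the awaited work completes.
    await Task.Delay(1000);

    var after = System.Threading.Thread.CurrentThread.ManagedThreadId;
    return Content("before=" + before + ", after=" + after);
}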