I have a chat app that I built with .NET Core, SignalR, and React Native. The chat works well when I publish it on a single server, but when I publish it to multiple servers with Docker Swarm, I get this error:
Unable to connect to the server with any of the available transports. WebSockets failed: Error: There was an error with the transport.
Despite this error message, the app sometimes works normally. When I leave the page and come back, it stops working again.
I am using an Ubuntu server. I have aligned the SignalR versions on the server and the client; both are 5.0.3. I don't have a proxy server in front of the app, and I'm using the load balancing feature of Docker Swarm.
ConfigureServices:
var tokenKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["TokenKey"]));
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(opt =>
    {
        opt.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = tokenKey,
            ValidateAudience = false,
            ValidateIssuer = false,
            ValidateLifetime = true,
            ClockSkew = TimeSpan.Zero
        };
        opt.Events = new JwtBearerEvents
        {
            OnMessageReceived = context =>
            {
                var accessToken = context.Request.Query["access_token"];
                var path = context.HttpContext.Request.Path;
                if (!string.IsNullOrEmpty(accessToken))
                {
                    if (path.StartsWithSegments("/chat")
                        || path.StartsWithSegments("/dialog"))
                    {
                        context.Token = accessToken;
                    }
                }
                return Task.CompletedTask;
            }
        };
    });
Configure:
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapHub<ChatHub>("/chat", opt => { opt.Transports = HttpTransportType.WebSockets; });
    endpoints.MapHub<DialogHub>("/dialog", opt => { opt.Transports = HttpTransportType.WebSockets; });
});
When scaling SignalR out to multiple servers, a shared backplane is needed to manage the distributed connection state, in addition to the network considerations.
As noted in the docs, Microsoft suggests either introducing a Redis backplane or delegating to their managed service, Azure SignalR Service.
An app that uses SignalR needs to keep track of all its connections,
which creates problems for a server farm. Add a server, and it gets
new connections that the other servers don't know about.
Having used Azure SignalR Service, I found it fairly straightforward to integrate with an ASP.NET Core app, and it offloads all of the connection-management overhead from your app.
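If you want to stay self-hosted on Docker Swarm, the Redis backplane is a small change on the server side. Here is a minimal sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package and a Redis endpoint reachable as "redis:6379" (a placeholder):

// In ConfigureServices: wire SignalR to a Redis backplane so that hub
// messages sent from one Swarm replica reach clients connected to the
// other replicas. "redis:6379" is a placeholder for your Redis endpoint;
// every replica must point at the same instance.
services.AddSignalR()
    .AddStackExchangeRedis("redis:6379");

The managed alternative is Azure SignalR Service, which the Microsoft.Azure.SignalR package wires up with services.AddSignalR().AddAzureSignalR().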
The following code works fine with my Azure SignalR Service (serverless mode), and I am able to receive messages/events successfully.
var connection = new HubConnectionBuilder()
    .WithUrl(connectionInfo.NegotiationUrl!, options =>
    {
        options.Headers.Add("x-ms-signalr-userid", "myuserid");
        options.Headers.Add("x-functions-key", "mykey");
    })
    .WithAutomaticReconnect(new[] { TimeSpan.Zero, TimeSpan.Zero, TimeSpan.FromMilliseconds(5) })
    .Build();

connection.Closed += exception =>
{
    return Task.CompletedTask;
};

connection.On("onMsg", (Action<object>)(message =>
{
    Console.WriteLine(message);
}));
I referenced the .NET MessagePack NuGet package for SignalR and invoked the .AddMessagePackProtocol() extension method on the hub connection builder, per the code below, but I stopped receiving messages from SignalR.
var connection = new HubConnectionBuilder()
    .WithUrl(connectionInfo.NegotiationUrl!, options =>
    {
        options.Headers.Add("x-ms-signalr-userid", "myuserid");
        options.Headers.Add("x-functions-key", "mykey");
    })
    .AddMessagePackProtocol()
    .WithAutomaticReconnect(new[] { TimeSpan.Zero, TimeSpan.Zero, TimeSpan.FromMilliseconds(5) })
    .Build();
Am I missing anything in this configuration? What is the right approach to solve this problem? I don't know whether we need to do anything in the Azure SignalR Service configuration to start receiving MessagePack packets.
I expect to receive the SignalR messages when the MessagePack protocol is enabled.
The chosen Azure SignalR Service transport type is Persistent, and the MessagePack protocol is not supported in this mode, per this article.
We have a .NET 6 MVC app using the built-in UseSession, storing sessions in a distributed cache (Redis) via the Microsoft.Extensions.Caching.StackExchangeRedis package.
We have noticed that the Redis server is receiving an enormous number of connections compared to the number of requests received.
Distributed cache setup in ConfigureServices:
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = configuration["RedisCache:ConnectionString"];
});
Session setup in ConfigureServices:
services.AddSession(options =>
{
    options.Cookie.Name = configuration.GetValue<string>("Session:CookieName");
    options.IdleTimeout = TimeSpan.FromMinutes(configuration.GetValue<int>("Session:SessionTimeout"));
    options.Cookie.HttpOnly = true;
    options.Cookie.IsEssential = true;
});
Config in Configure:
app.UseSession();
Any idea what's happening? The Connected Clients metric stays stable at 5, but the connections received are not proportionate to the number of keys/users on the site.
The Redis instance is an Azure Cache for Redis on a C1 tier.
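For reference, the cache can also be pointed at an existing connection rather than letting it create its own. A minimal sketch using the ConnectionMultiplexerFactory option exposed on RedisCacheOptions in the 6.x package (the shared multiplexer below is illustrative, not our production wiring):

// Create one ConnectionMultiplexer for the whole app and hand it to the
// distributed cache so it reuses that connection instead of opening more.
var multiplexer = ConnectionMultiplexer.Connect(configuration["RedisCache:ConnectionString"]);
services.AddSingleton<IConnectionMultiplexer>(multiplexer);
services.AddStackExchangeRedisCache(options =>
{
    options.ConnectionMultiplexerFactory =
        () => Task.FromResult<IConnectionMultiplexer>(multiplexer);
});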
I have an API app created using ASP.NET Core. I'm trying to enforce the use of client certificates as described here.
I did tell Kestrel to require certificates in Program.cs:
webBuilder.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https => https.ClientCertificateMode = ClientCertificateMode.RequireCertificate);
});
And I did add an event handler in Startup.cs:
services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        options.Events = new CertificateAuthenticationEvents
        {
            OnCertificateValidated = context =>
            {
                // Breakpoint set here is never hit.
                return Task.CompletedTask;
            }
        };
    });
When I debug the API running locally, it still doesn't require any certificates. If I provide a certificate anyway, the breakpoint in the event handler is never hit.
I have an ASP.NET Core API project. In this project I am using JWT Bearer authentication. I am also using the AddDistributedRedisCache feature of .NET Core dependency injection (both shown below).
We have a need to blacklist tokens on occasion (an admin user removing rights, logout, etc.) so that these changes take immediate effect, essentially forcing a user to log back in before the next call can be made.
We are adding the JWT tokens to the Redis cache as well as removing them from the client-side cache on logout. But a user could (in theory) store the JWT token and still gain access until the token expires, unless we intercept the call and check it against the blacklist.
How can I access the distributed cache object in the OnTokenValidated event in the code below? Do I have to manually create a new connection each time? We are only checking valid tokens, as that will stop invalid requests from even being checked against the blacklist.
Bearer Token Config:
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = "localhost:5000",
            ValidAudience = "localhost:5000",
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(Configuration.GetValue<string>("SigningKey"))),
        };
        options.Events = new JwtBearerEvents
        {
            OnTokenValidated = context =>
            {
                //context.Fail("User has been logged out");
                return Task.CompletedTask;
            }
        };
    });
Redis Cache Config:
services.AddDistributedRedisCache(option =>
{
    option.Configuration = Configuration.GetValue<string>("RedisCacheAddress");
    option.InstanceName = Configuration.GetValue<string>("RedisCacheInstance");
});
You can access services registered in DI by using the HttpContext that is available there:
OnTokenValidated = ctx =>
{
    var cache = ctx.HttpContext.RequestServices.GetRequiredService<IDistributedCache>();
    return Task.CompletedTask;
}
GetRequiredService will throw an exception if the service is not found. You can use GetService<T>() if you want the service to be optional.
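To tie this back to the blacklist scenario, here is a minimal sketch of what the handler could look like; the "blacklist:{jti}" key convention is just an assumption for illustration, and it presumes your tokens carry a jti claim:

OnTokenValidated = async ctx =>
{
    var cache = ctx.HttpContext.RequestServices.GetRequiredService<IDistributedCache>();

    // Assumed convention: blacklisted tokens are stored under "blacklist:<jti>"
    // with an expiry matching the token lifetime.
    var jti = ctx.Principal?.FindFirst(JwtRegisteredClaimNames.Jti)?.Value;
    if (jti != null && await cache.GetStringAsync($"blacklist:{jti}") != null)
    {
        ctx.Fail("User has been logged out");
    }
}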
I have an ASP.NET Core web app that uses Windows authentication, and I am trying to set up integration tests for it.
Inside the startup, authorization is configured as follows:
services.Configure<IISOptions>(options =>
{
    options.ForwardWindowsAuthentication = true;
});

services.AddAuthorization(options =>
{
    options.AddPolicy("SiteRead", policy => policy.RequireAssertion(
        context => context.User.HasClaim(
            x => x.Value == "groupSidHere"
        )));
});

services.AddMvc(config =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    config.Filters.Add(new AuthorizeFilter(policy));
});
The test is as follows
var server = new TestServer(builder);
var client = server.CreateClient();
var response = await client.GetAsync("/");
response.EnsureSuccessStatusCode();
The test fails with the following response
InvalidOperationException: No authentication handler is configured to handle the scheme: Automatic
All the documentation I have been able to find for integration tests doesn't cover this scenario (Windows auth). Has anyone found a solution to this?
See this issue, where they say:
We ended up solving our need for Windows auth with TestServer by creating a little library that will inject some windows auth services into the pipeline to emulate the behavior provided by IIS - you can find it at
You will find their library "IntelliTect.AspNetCore.TestHost.WindowsAuth" here.
I faced the same issue, and that library worked for me! It actually injects real Windows authentication data, not just mock data.
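If the tests only need an authenticated principal rather than a real Windows identity, another option (in ASP.NET Core 2.x and later test hosts) is to register a fake authentication handler. A minimal sketch; the scheme name, claim values, and wiring below are made up for illustration:

// A fake authentication handler that authenticates every request with a
// fixed principal. Register it only in the test host's service collection.
public class TestAuthHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    public TestAuthHandler(IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger, UrlEncoder encoder, ISystemClock clock)
        : base(options, logger, encoder, clock) { }

    protected override Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        var claims = new[]
        {
            new Claim(ClaimTypes.Name, "TestUser"),
            // Claim value chosen to satisfy the "SiteRead" policy assertion above.
            new Claim(ClaimTypes.GroupSid, "groupSidHere")
        };
        var identity = new ClaimsIdentity(claims, "Test");
        var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity), "Test");
        return Task.FromResult(AuthenticateResult.Success(ticket));
    }
}

// In the test host's ConfigureServices:
services.AddAuthentication("Test")
    .AddScheme<AuthenticationSchemeOptions, TestAuthHandler>("Test", _ => { });

The trade-off versus the IntelliTect library is that this only simulates an authenticated user with the claims you choose; it does not exercise real Windows authentication.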