TelemetryProcessor - Multiple instances overwrite Custom Properties - azure-application-insights

I have a very basic HTTP POST-triggered API which creates a TelemetryClient. I needed to provide a custom property in this telemetry for each individual request, so I implemented a TelemetryProcessor.
However, when subsequent POST requests are handled and a new TelemetryClient is created, it seems to interfere with the first request. I end up seeing maybe a dozen or so entries in App Insights containing the first customPropertyId, and close to 500 for the second, when in reality the split should be roughly even. It seems as though the creation of the second TelemetryClient somehow interferes with the first.
Basic code is below, if anyone has any insight (no pun intended) as to why this might occur, I would greatly appreciate it.
ApiController which handles the POST request:
public class TestApiController : ApiController
{
    public HttpResponseMessage Post([FromBody]RequestInput request)
    {
        try
        {
            Task.Run(() => ProcessRequest(request));
            return Request.CreateResponse(HttpStatusCode.OK);
        }
        catch (Exception)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, Constants.GenericErrorMessage);
        }
    }

    private async void ProcessRequest(RequestInput request)
    {
        string customPropertyId = request.customPropertyId;
        //trace handler creates the TelemetryClient for custom property
        CustomTelemetryProcessor handler = new CustomTelemetryProcessor(customPropertyId);
        //etc.....
    }
}
CustomTelemetryProcessor which creates the TelemetryClient:
public class CustomTelemetryProcessor
{
    private readonly string _customPropertyId;
    private readonly TelemetryClient _telemetryClient;

    public CustomTelemetryProcessor(string customPropertyId)
    {
        _customPropertyId = customPropertyId;
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.Use((next) => new TelemetryProcessor(next, _customPropertyId));
        builder.Build();
        _telemetryClient = new TelemetryClient();
    }
}
TelemetryProcessor:
public class TelemetryProcessor : ITelemetryProcessor
{
    private string CustomPropertyId { get; }
    private ITelemetryProcessor Next { get; set; }

    // Link processors to each other in a chain.
    public TelemetryProcessor(ITelemetryProcessor next, string customPropertyId)
    {
        CustomPropertyId = customPropertyId;
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!item.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            item.Context.Properties.Add("CustomPropertyId", CustomPropertyId);
        }
        else
        {
            item.Context.Properties["CustomPropertyId"] = CustomPropertyId;
        }
        Next.Process(item);
    }
}

It's better to avoid creating a TelemetryClient per request; instead, re-use a single static TelemetryClient instance. Telemetry Processors and/or Telemetry Initializers should also typically be registered only once for the telemetry pipeline, not for every request. TelemetryConfiguration.Active is static, so adding a new processor with each request means the chain of processors only grows.
The appropriate setup is to add a Telemetry Initializer (Telemetry Processors are typically used for filtering and Initializers for data enrichment) once into the telemetry pipeline, e.g. through adding an entry to the ApplicationInsights.config file (if present) or via code on TelemetryConfiguration.Active somewhere in global.asax, e.g. in Application_Start:
TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
Initializers are executed in the same context/thread where Track..(..) was called / the telemetry was created, so they will have access to thread-local storage and/or local objects to read parameters/values from.
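For illustration, here is a minimal sketch of such an initializer, assuming the per-request id is stashed in an AsyncLocal<string> (the CustomPropertyContext type and the "CustomPropertyId" key are made-up names, and the value would be set at the start of ProcessRequest); this is one possible approach, not the only one:

using System.Threading;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical ambient storage for the per-request id; set
// CustomPropertyContext.CurrentId.Value = request.customPropertyId
// at the start of ProcessRequest. AsyncLocal flows into Task.Run.
public static class CustomPropertyContext
{
    public static readonly AsyncLocal<string> CurrentId = new AsyncLocal<string>();
}

public class CustomPropertyInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var id = CustomPropertyContext.CurrentId.Value;
        if (!string.IsNullOrEmpty(id) &&
            !telemetry.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            telemetry.Context.Properties.Add("CustomPropertyId", id);
        }
    }
}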

Related

Using Akka.net with Asp.net on a Modular Monolith architecture

I would like to implement a REST service using Akka.NET and ASP.NET.
Following the example here
I created my AkkaService containing the FooActor reference and a controller that transforms the HTTP request into a RunProcess message, which is sent to the FooActor.
[Route("api/[controller]")]
[ApiController]
public class MyController : Controller
{
private readonly ILogger<MyController> _logger;
private readonly IAkkaService Service;
public RebalancingController(ILogger<MyController> logger, IAkkaService bridge)
{
_logger = logger;
Service = bridge;
}
[HttpGet]
public async Task<ProcessTerminated> Get()
{
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(60));
return await Service.RunProcess(cts.Token);
}
}
public class AkkaService : IAkkaService, IHostedService
{
    private ActorSystem ActorSystem { get; set; }
    public IActorRef FooActor { get; private set; }
    private readonly IServiceProvider ServiceProvider;

    public AkkaService(IServiceProvider sp)
    {
        ServiceProvider = sp;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        var hocon = ConfigurationFactory.ParseString(await File.ReadAllTextAsync("app.conf", cancellationToken));
        var bootstrap = BootstrapSetup.Create().WithConfig(hocon);
        var di = DependencyResolverSetup.Create(ServiceProvider);
        var actorSystemSetup = bootstrap.And(di);
        ActorSystem = ActorSystem.Create("AkkaSandbox", actorSystemSetup);
        // </AkkaServiceSetup>

        // <ServiceProviderFor>
        // props created via IServiceProvider dependency injection
        var fooProps = DependencyResolver.For(ActorSystem).Props<FooActor>();
        FooActor = ActorSystem.ActorOf(fooProps.WithRouter(FromConfig.Instance), "foo");
        // </ServiceProviderFor>
        await Task.CompletedTask;
    }

    public async Task<ProcessTerminated> RunProcess(CancellationToken token)
    {
        return await FooActor.Ask<ProcessTerminated>(new RunProcess(), token);
    }
}

// FooActor constructor:
public FooActor(IServiceProvider sp)
{
    _scope = sp.CreateScope();

    Receive<RunProcess>(x =>
    {
        var basketActor = Context.ActorOf(Props.Create<BarActor>(sp), "BarActor");
        basketActor.Tell(new BarRequest());
        _log.Info($"Sending a request to Bar Actor ");
    });

    Receive<BarResponse>(x =>
    {
        // ...... Here I need to send back a ProcessTerminated message to the controller
    });
}
Now, let's imagine the FooActor sends a message to the BarActor telling it to perform a given task and waits for the BarResponse. How can I send the ProcessTerminated message back to the controller?
A few points to take into consideration:
I want to ensure no coupling between BarActor and FooActor. For example, I could add the original sender ActorRef to the BarRequest and BarResponse, but the BarActor mustn't know about the FooActor or MyController. The structure of the messages and how the BarActor responds should not depend on what the FooActor does with the BarResponse.
In the example I only use BarActor, but you can imagine many different actors exchanging messages before the final result is returned to the controller.
Nitpick: you should use Akka.Hosting and avoid creating this mock wrapper service around the ActorSystem. That will allow you to pass the ActorRegistry directly into your controller, which you can then use to access FooActor without additional boilerplate. See the "Introduction to Akka.Hosting - HOCONless, "Pit of Success" Akka.NET Runtime and Configuration" video for a fuller explanation.
Next: to send the ProcessTerminated message back to your controller you need to save the Sender (the IActorRef that points to the temporary actor created by Ask<T>, in this instance) during your Receive<RunProcess> and make sure that this value is available inside your Receive<BarResponse>.
The simple ways to accomplish that:
Store the Sender in a field on the FooActor, use behavior-switching while you wait for the BarActor to respond, and then revert back to your original behavior.
Build a Dictionary<RunProcess, IActorRef> (the key should probably actually be some unique ID shared by RunProcess and BarResponse - a "correlation id") and reply to the corresponding IActorRef stored in the dictionary when BarResponse is received. Remove the entry after processing.
Propagate the Sender in the BarRequest and BarResponse message payloads themselves.
All three of those would work. If I thought there were going to be a large number of RunProcess requests running in parallel I'd opt for option 2.
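For illustration, a minimal sketch of option 2, assuming BarRequest and BarResponse are extended with a CorrelationId property (that property and the constructor shown here are made up for the example):

using System;
using System.Collections.Generic;
using Akka.Actor;

public sealed class FooActor : ReceiveActor
{
    // correlation id -> the temporary actor created by Ask<ProcessTerminated> in the controller
    private readonly Dictionary<Guid, IActorRef> _pending = new Dictionary<Guid, IActorRef>();

    public FooActor()
    {
        Receive<RunProcess>(msg =>
        {
            var correlationId = Guid.NewGuid();
            _pending[correlationId] = Sender;                 // remember who asked
            var barActor = Context.ActorOf(Props.Create<BarActor>(), "BarActor");
            barActor.Tell(new BarRequest(correlationId));     // BarResponse must echo the id back
        });

        Receive<BarResponse>(msg =>
        {
            if (_pending.TryGetValue(msg.CorrelationId, out var originalSender))
            {
                originalSender.Tell(new ProcessTerminated()); // completes the controller's Ask
                _pending.Remove(msg.CorrelationId);
            }
        });
    }
}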
Another way of doing it is to simply forward the message to the next actor. The Tell operation has a second parameter that can be used to override the message sender. If you're sure that every path has to respond back to the original Ask inside the HTTP controller, you can do this inside the FooActor:
Receive<RunProcess>(x =>
{
    var basketActor = Context.ActorOf(Props.Create<BarActor>(sp), "BarActor");
    basketActor.Tell(new BarRequest(), Sender);
    _log.Info($"Sending a request to Bar Actor ");
});
This way, the original Ask actor is considered the sender of the new BarRequest message instead of the FooActor, and if BarActor decides to reply by doing Sender.Tell(new ProcessTerminated()), the ProcessTerminated message will be sent to the HTTP controller.

Spring OAuth2 Making `state` param at least 32 characters long

I am attempting to authorize against an external identity provider. Everything seems set up fine, but I keep getting a validation error from my identity provider because the state parameter automatically tacked onto my authorization request is not long enough:
For example:
&state=uYG5DC
The requirements of my IDP say that this state param must be at least 32 characters long. How can I programmatically increase the length of this auto-generated value?
Even if I could generate this value myself, it is not possible to override it with the other methods I have seen suggested. The following attempt fails because my manual setting of ?state=abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz is superseded by the auto-generated param placed after it during the actual request:
@Bean
public OAuth2ProtectedResourceDetails loginGovOpenId() {
    AuthorizationCodeResourceDetails details = new AuthorizationCodeResourceDetails() {
        @Override
        public String getUserAuthorizationUri() {
            return super.getUserAuthorizationUri() + "?state=abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz";
        }
    };
    details.setClientId(clientId);
    details.setAccessTokenUri(accessTokenUri);
    details.setUserAuthorizationUri(userAuthorizationUri);
    details.setScope(Arrays.asList("openid", "email"));
    details.setPreEstablishedRedirectUri(redirectUri);
    details.setUseCurrentUri(true);
    return details;
}
The 6-character default seems to be set here; is there a way to override it?
https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/main/java/org/springframework/security/oauth2/common/util/RandomValueStringGenerator.java
With the help of this post:
spring security StateKeyGenerator custom instance
I was able to come up with a working solution.
In my configuration class marked with these annotations:
@Configuration
@EnableOAuth2Client
I configured the following beans:
@Bean
public OAuth2ProtectedResourceDetails loginGovOpenId() {
    AuthorizationCodeResourceDetails details = new AuthorizationCodeResourceDetails();
    details.setClientId(clientId);
    details.setClientSecret(clientSecret);
    details.setAccessTokenUri(accessTokenUri);
    details.setUserAuthorizationUri(userAuthorizationUri);
    details.setScope(Arrays.asList("openid", "email"));
    details.setPreEstablishedRedirectUri(redirectUri);
    details.setUseCurrentUri(true);
    return details;
}

@Bean
public StateKeyGenerator stateKeyGenerator() {
    return new CustomStateKeyGenerator();
}

@Bean
public AccessTokenProvider accessTokenProvider() {
    AuthorizationCodeAccessTokenProvider accessTokenProvider = new AuthorizationCodeAccessTokenProvider();
    accessTokenProvider.setStateKeyGenerator(stateKeyGenerator());
    return accessTokenProvider;
}

@Bean
public OAuth2RestTemplate loginGovOpenIdTemplate(final OAuth2ClientContext clientContext) {
    final OAuth2RestTemplate template = new OAuth2RestTemplate(loginGovOpenId(), clientContext);
    template.setAccessTokenProvider(accessTokenProvider());
    return template;
}
Where my CustomStateKeyGenerator implementation class looks as follows:
public class CustomStateKeyGenerator implements StateKeyGenerator {

    // login.gov requires state to be at least 32 characters long
    private static int length = 32;

    private RandomValueStringGenerator generator = new RandomValueStringGenerator(length);

    @Override
    public String generateKey(OAuth2ProtectedResourceDetails resource) {
        return generator.generate();
    }
}

Using Unity Dependency Injection in Multi-User Web Application: Second User to Log In Causes First User To See Second User's Data

I'm trying to implement a web application using ASP.NET MVC and the Microsoft Unity DI framework. The application needs to support multiple user sessions at the same time, each with its own connection to a separate database (all users use the same DbContext type; the database schemas are identical, it's just the data that differs).
Upon a user's log-in, I register the necessary type mappings to the application's Unity container, using a session-based lifetime manager that I found in another question here.
My container is initialized like this:
// Global.asax.cs
public static UnityContainer CurrentUnityContainer { get; set; }

protected void Application_Start()
{
    // ...other code...
    CurrentUnityContainer = UnityConfig.Initialize();
    // misc services - nothing data access related, apart from the fact that they all depend on IRepository<ClientContext>
    UnityConfig.RegisterComponents(CurrentUnityContainer);
}

// UnityConfig.cs
public static UnityContainer Initialize()
{
    UnityContainer container = new UnityContainer();
    DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    GlobalConfiguration.Configuration.DependencyResolver = new Unity.WebApi.UnityDependencyResolver(container);
    return container;
}
This is the code that's called upon logging in:
// UserController.cs
UnityConfig.RegisterUserDataAccess(MvcApplication.CurrentUnityContainer, UserData.Get(model.AzureUID).CurrentDatabase);

// UnityConfig.cs
public static void RegisterUserDataAccess(IUnityContainer container, string databaseName)
{
    container.AddExtension(new DataAccessDependencies(databaseName));
}

// DataAccessDependencies.cs
public class DataAccessDependencies : UnityContainerExtension
{
    private readonly string _databaseName;

    public DataAccessDependencies(string databaseName)
    {
        _databaseName = databaseName;
    }

    protected override void Initialize()
    {
        IConfigurationBuilder configurationBuilder = Container.Resolve<IConfigurationBuilder>();
        Container.RegisterType<ClientContext>(new SessionLifetimeManager(), new InjectionConstructor(configurationBuilder.GetConnectionString(_databaseName)));
        Container.RegisterType<IRepository<ClientContext>, RepositoryService<ClientContext>>(new SessionLifetimeManager());
    }
}

// SessionLifetimeManager.cs
public class SessionLifetimeManager : LifetimeManager
{
    private readonly string _key = Guid.NewGuid().ToString();

    public override void RemoveValue(ILifetimeContainer container = null)
    {
        HttpContext.Current.Session.Remove(_key);
    }

    public override void SetValue(object newValue, ILifetimeContainer container = null)
    {
        HttpContext.Current.Session[_key] = newValue;
    }

    public override object GetValue(ILifetimeContainer container = null)
    {
        return HttpContext.Current.Session[_key];
    }

    protected override LifetimeManager OnCreateLifetimeManager()
    {
        return new SessionLifetimeManager();
    }
}
}
This works fine as long as only one user is logged in at a time. The data is fetched properly, the dashboards work as expected, and everything's just peachy keen.
Then, as soon as a second user logs in, disaster strikes.
The last user to have prompted a call to RegisterUserDataAccess seems to always have "priority"; their data is displayed on the dashboard, and nothing else. Whether this is initiated by a log-in, or through a database access selection in my web application that calls the same method to re-route the user's connection to another database they have permission to access, the last one to draw always imposes their data on all other users of the web application. If I understand correctly, this is a problem the SessionLifetimeManager was supposed to solve - unfortunately, I really can't seem to get it to work.
I sincerely doubt that a simple and common use-case like this - multiple users logged into an MVC application who each are supposed to access their own, separate data - is beyond the abilities of Unity, so obviously, I must be doing something very wrong here. Having spent most of my day searching through depths of the internet I wasn't even sure truly existed, I must, unfortunately, now realize that I am at a total and utter loss here.
Has anyone dealt with this issue before? Has anyone dealt with this use-case before, and if yes, can anyone tell me how to change my approach to make this a little less headache-inducing? I am utterly desperate at this point and am considering rewriting my entire data access methodology just to make it work - not the healthiest mindset for clean and maintainable code.
Many thanks.
The issue seems to originate from your registration call: when registering the same type multiple times with Unity, the last registration call wins. In this case, that will be the data access object for whichever user logs in last. Unity will take that as the default registration and will create instances that have the connection to that user's database.
The SessionLifetimeManager is there to make sure you get only one instance of the objects you resolve under one session.
One option to solve this is to use named registration syntax to register the data-access types under a key that maps to the logged-in user (it could be the database name), and on the resolve side, retrieve this user key and use it to resolve the corresponding data access implementation for the user.
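For illustration, a rough sketch of that approach using Unity's named registrations; the databaseName key and the UserData.Current.CurrentDatabase lookup are placeholders for however you identify the logged-in user's database:

// Registration, e.g. once per database a user may connect to:
container.RegisterType<ClientContext>(
    databaseName,
    new SessionLifetimeManager(),
    new InjectionConstructor(configurationBuilder.GetConnectionString(databaseName)));

container.RegisterType<IRepository<ClientContext>, RepositoryService<ClientContext>>(
    databaseName,
    new SessionLifetimeManager(),
    new InjectionConstructor(new ResolvedParameter<ClientContext>(databaseName)));

// Resolution, using the key that belongs to the logged-in user:
var repository = container.Resolve<IRepository<ClientContext>>(UserData.Current.CurrentDatabase);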
Thank you, Mohammed. Your answer has put me on the right track - I ended up finally solving this using a RepositoryFactory which is instantiated in an InjectionFactory during registration and returns a repository that always wraps around a ClientContext pointing to the currently logged on user's currently selected database.
// DataAccessDependencies.cs
protected override void Initialize()
{
    IConfigurationBuilder configurationBuilder = Container.Resolve<IConfigurationBuilder>();
    Container.RegisterType<IRepository<ClientContext>>(new InjectionFactory(c =>
    {
        ClientRepositoryFactory repositoryFactory = new ClientRepositoryFactory(configurationBuilder);
        return repositoryFactory.GetRepository();
    }));
}

// ClientRepositoryFactory.cs
public class ClientRepositoryFactory : IRepositoryFactory<RepositoryService<ClientContext>>
{
    private readonly IConfigurationBuilder _configurationBuilder;

    public ClientRepositoryFactory(IConfigurationBuilder configurationBuilder)
    {
        _configurationBuilder = configurationBuilder;
    }

    public RepositoryService<ClientContext> GetRepository()
    {
        var connectionString = _configurationBuilder.GetConnectionString(UserData.Current.CurrentPermission);
        ClientContext ctx = new ClientContext(connectionString);
        RepositoryService<ClientContext> repository = new RepositoryService<ClientContext>(ctx);
        return repository;
    }
}
// UserData.cs (multiton-singleton-hybrid)
public static UserData Current
{
    get
    {
        var currentAADUID = (string)(HttpContext.Current.Session["currentAADUID"]);
        return Get(currentAADUID);
    }
}

public static UserData Get(string AADUID)
{
    UserData instance;
    lock (_instances)
    {
        if (!_instances.TryGetValue(AADUID, out instance))
        {
            throw new UserDataNotInitializedException();
        }
    }
    return instance;
}

Quartz.net and Ninject: how to bind implementation to my job using NInject

I am currently working on an ASP.NET MVC 4 web application where we are using Ninject for dependency injection. We are also using UnitOfWork and repositories based on Entity Framework.
We would like to use Quartz.NET in our application to run some custom jobs periodically. I would like Ninject to automatically bind the services that we need in our jobs.
It could be something like this:
public class DispatchingJob : IJob
{
    private readonly IDispatchingManagementService _dispatchingManagementService;

    public DispatchingJob(IDispatchingManagementService dispatchingManagementService)
    {
        _dispatchingManagementService = dispatchingManagementService;
    }

    public void Execute(IJobExecutionContext context)
    {
        LogManager.Instance.Info(string.Format("Dispatching job started at: {0}", DateTime.Now));
        _dispatchingManagementService.DispatchAtomicChecks();
        LogManager.Instance.Info(string.Format("Dispatching job ended at: {0}", DateTime.Now));
    }
}
So far, in our NinjectWebCommon, the binding is configured like this (using request scope):
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
Is it possible to inject the correct implementation into our custom job using Ninject, and how do I do it? I have already read a few posts on Stack Overflow, but I need some advice and an example using Ninject.
Use a JobFactory in your Quartz schedule, and resolve your job instance there.
So, in your NInject config set up the job (I'm guessing at the correct NInject syntax here)
// Assuming you only have one IJob
kernel.Bind<IJob>().To<DispatchingJob>();
Then, create a JobFactory: [edit: this is a modified version of @BatteryBackupUnit's answer here]
public class NInjectJobFactory : IJobFactory
{
    private readonly IResolutionRoot resolutionRoot;

    public NInjectJobFactory(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // If you have multiple jobs, specify the name as
        // bundle.JobDetail.JobType.Name, or pass the type, whatever
        // NInject wants..
        return (IJob)this.resolutionRoot.Get<IJob>();
    }

    public void ReturnJob(IJob job)
    {
        this.resolutionRoot.Release(job);
    }
}
Then, when you create the scheduler, assign the JobFactory to it:
private IScheduler GetSchedule(IResolutionRoot root)
{
    var schedule = new StdSchedulerFactory().GetScheduler();
    schedule.JobFactory = new NInjectJobFactory(root);
    return schedule;
}
Quartz will then use the JobFactory to create the job, and NInject will resolve the dependencies for you.
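For completeness, a rough sketch of how the wiring might look at application start-up; the job and trigger names and the 5-minute interval are placeholders, and this assumes the kernel is available where the scheduler is created:

// e.g. somewhere in Application_Start / NinjectWebCommon, after the kernel is built
IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
scheduler.JobFactory = new NInjectJobFactory(kernel);

IJobDetail job = JobBuilder.Create<DispatchingJob>()
    .WithIdentity("dispatchingJob")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("dispatchingTrigger")
    .StartNow()
    .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever())
    .Build();

scheduler.ScheduleJob(job, trigger);
scheduler.Start();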
Regarding scoping of the IUnitOfWork, as per a comment on the answer I linked, you can do:
// default for web requests
Bind<IUnitOfWork>().To<UnitOfWork>()
    .InRequestScope();

// fall back to `InCallScope()` when there's no web request.
Bind<IUnitOfWork>().To<UnitOfWork>()
    .When(x => HttpContext.Current == null)
    .InCallScope();
There's only one caveat that you should be aware of:
With incorrect usage of async in a web request, you may mistakenly resolve an IUnitOfWork on a worker thread where HttpContext.Current is null. Without the fallback binding, this would fail with an exception, which would show you that you've done something wrong. With the fallback binding, however, the issue may present itself in an obscured way: it may work sometimes and sometimes not, because there will be two (or even more) IUnitOfWork instances for the same request.
To remedy this, we can make the binding more specific. For that, we need some parameter to tell us to use something other than InRequestScope(). Have a look at:
public class NonRequestScopedParameter : Ninject.Parameters.IParameter
{
    public bool Equals(IParameter other)
    {
        if (other == null)
        {
            return false;
        }
        return other is NonRequestScopedParameter;
    }

    public object GetValue(IContext context, ITarget target)
    {
        throw new NotSupportedException("this parameter does not provide a value");
    }

    public string Name
    {
        get { return typeof(NonRequestScopedParameter).Name; }
    }

    // this is very important
    public bool ShouldInherit
    {
        get { return true; }
    }
}
now adapt the job factory as follows:
public class NInjectJobFactory : IJobFactory
{
    private readonly IResolutionRoot resolutionRoot;

    public NInjectJobFactory(IResolutionRoot resolutionRoot)
    {
        this.resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        return (IJob)this.resolutionRoot.Get(
            bundle.JobDetail.JobType,
            new NonRequestScopedParameter()); // parameter goes here
    }

    public void ReturnJob(IJob job)
    {
        this.resolutionRoot.Release(job);
    }
}
and adapt the IUnitOfWork bindings:
Bind<IUnitOfWork>().To<UnitOfWork>()
    .InRequestScope();

Bind<IUnitOfWork>().To<UnitOfWork>()
    .When(x => x.Parameters.OfType<NonRequestScopedParameter>().Any())
    .InCallScope();
This way, if you use async wrong, there'll still be an exception, but IUnitOfWork scoping will still work for quartz tasks.
For any users that could be interested, here is the solution that finally worked for me.
I made it work with some adjustments to match my project. Please note that in the NewJob method, I replaced the call to Kernel.Get with _resolutionRoot.Get.
As you can find here:
public class JobFactory : IJobFactory
{
    private readonly IResolutionRoot _resolutionRoot;

    public JobFactory(IResolutionRoot resolutionRoot)
    {
        this._resolutionRoot = resolutionRoot;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        try
        {
            return (IJob)_resolutionRoot.Get(
                bundle.JobDetail.JobType, new NonRequestScopedParameter()); // parameter goes here
        }
        catch (Exception ex)
        {
            LogManager.Instance.Info(string.Format("Exception raised in JobFactory"));
            throw; // rethrow so that every code path either returns a job or fails explicitly
        }
    }

    public void ReturnJob(IJob job)
    {
    }
}
And here is the call to schedule my job:
public static void RegisterScheduler(IKernel kernel)
{
    try
    {
        var scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.JobFactory = new JobFactory(kernel);
        ....
    }
}
Thank you very much for your help
Thanks so much for your response. I have implemented something like that and the binding is working :):
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
    var resolver = DependencyResolver.Current;
    var myJob = (IJob)resolver.GetService(typeof(IJob));
    return myJob;
}
As I said before, in my project I am using a service and a unit of work (based on EF) that are both injected with Ninject.
public class DispatchingManagementService : IDispatchingManagementService
{
    private readonly IUnitOfWork _unitOfWork;

    public DispatchingManagementService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }
}
Please find here how I am binding the implementations:
kernel.Bind<IUnitOfWork>().To<EfUnitOfWork>();
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
kernel.Bind<IJob>().To<DispatchingJob>();
To summarize, the binding of IUnitOfWork is done:
- every time a new request comes into my ASP.NET MVC application: request scope
- every time the job runs: InCallScope
What are the best practices given the behavior of EF? I have found information suggesting InCallScope. Is it possible to tell Ninject to use request scope every time a new request comes into the application, and InCallScope every time my job runs? How can I do that?
Thank you very much for your help

What is the correct way to store client variables in SignalR?

Currently, I am using a Dictionary and Context.User.Identity.Name (code condensed for brevity):
[Authorize]
public class ServiceHub : Hub
{
    static private Dictionary<string, HubUserProcess> UserProcesses = new Dictionary<string, HubUserProcess>();

    public override Task OnConnected()
    {
        UserProcesses[Context.User.Identity.Name] = new HubUserProcess();
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        // ... Remove from dictionary if key exists (not shown) ...
        return base.OnDisconnected();
    }

    // Then I use UserProcesses[Context.User.Identity.Name] in all functions
}
In my HubUserProcess class, I have a bunch of web services that initialize in the constructor using the Context.User.Identity.Name. A coworker said that my approach is unsafe, so my biggest worry is one user accessing another user's private data (these variables can hold very sensitive information). What is the correct/safe way to store client variables?
