What is the correct way to store client variables in SignalR?

Currently, I am using a Dictionary and Context.User.Identity.Name (code condensed for brevity):
[Authorize]
public class ServiceHub : Hub
{
    private static Dictionary<string, HubUserProcess> UserProcesses = new Dictionary<string, HubUserProcess>();

    public override Task OnConnected()
    {
        UserProcesses[Context.User.Identity.Name] = new HubUserProcess();
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        // ... Remove from dictionary if key exists (not shown) ...
        return base.OnDisconnected();
    }

    // Then I use UserProcesses[Context.User.Identity.Name] in all functions
}
In my HubUserProcess class, I have a bunch of web services that initialize in the constructor using the Context.User.Identity.Name. A coworker said that my approach is unsafe, so my biggest worry is one user accessing another user's private data (these variables can hold very sensitive information). What is the correct/safe way to store client variables?
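For reference (not part of the original question), a minimal sketch of the same pattern using a thread-safe ConcurrentDictionary keyed on the authenticated user name; HubUserProcess is the asker's own class and is assumed to exist:
using System.Collections.Concurrent;
using System.Threading.Tasks;

[Authorize]
public class ServiceHub : Hub
{
    // One entry per authenticated user; a concurrent collection avoids corrupting
    // the shared map when several connections arrive or drop at the same time.
    private static readonly ConcurrentDictionary<string, HubUserProcess> UserProcesses =
        new ConcurrentDictionary<string, HubUserProcess>();

    public override Task OnConnected()
    {
        UserProcesses.GetOrAdd(Context.User.Identity.Name, name => new HubUserProcess());
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        HubUserProcess removed;
        UserProcesses.TryRemove(Context.User.Identity.Name, out removed);
        return base.OnDisconnected();
    }
}
Because the key is the authenticated identity enforced by [Authorize], a connection can only ever reach the entry stored under its own user name; the remaining concern is concurrent access to the shared static dictionary, which the concurrent collection addresses.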

Related

MAUI+ASP.NET DTOs

I have a project consisting of 2 parts:
ASP.NET API using Entity Framework
.NET MAUI Client App
I use DTOs for communication from/to the API in order not to expose other properties of my entities. Thanks to this approach I was able to separate entity data from the data that is sent from the API.
At first I used these DTOs also in the MAUI UI. But after some time I started to notice that they contain UI-specific properties, attributes or methods that have no purpose for the API itself, so they are redundant in requests.
EXAMPLE:
1 - The API receives a request from MAUI to get an exercise based on its name.
2 - ExerciseService returns an ExerciseEntity, and ExerciseController uses AutoMapper to map ExerciseEntity -> ExerciseDto, omitting the ExerciseId field (only an admin can see this info in the DB), and returns it in the API response.
3 - MAUI receives the ExerciseDto from the API. But on the client side it also wants to know whether the data from ExerciseDto is collapsed in the UI. Because of that I added an IsCollapsed property to the ExerciseDto. But now this is a redundant property for the API, because I don't want to persist this information in the database.
QUESTIONS:
Should I map these DTOs to new objects on the client side?
Or how should I approach this problem?
Is there an easier way to achieve the separation?
Because having another mapping layer will add extra complexity and a lot of duplicate properties between DTOs and those new client objects.
Normally, if you use a clean architecture approach, your DTOs should contain no attributes or other data relevant to just some of your projects, so they can be freely used by other projects in the form of a dependency.
Then you have different approaches for consuming DTOs in a Xamarin/MAUI application, for example:
APPROACH 1.
Mapping (of course) into a class that is suitable for the UI. Here you have some options: use manual mapping, write your own code that uses reflection, or use some third-party lib that uses the same reflection. Personally I use all of them, and when speaking of third-party libs, Mapster has worked very well for me for API and mobile clients.
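For illustration only (this Mapster sketch is mine, not part of the original answer): mapping an API DTO into a client-side model that carries UI-only state. ExerciseDto mirrors the question above; ExerciseModel and its IsCollapsed property are hypothetical UI-side names.
using Mapster;

// DTO as received from the API (assumed shape, based on the question above).
public class ExerciseDto
{
    public string Name { get; set; }
    public string Description { get; set; }
}

// Client-side model; the type name and the UI-only IsCollapsed property are hypothetical.
public class ExerciseModel
{
    public string Name { get; set; }
    public string Description { get; set; }
    public bool IsCollapsed { get; set; }   // UI state only, never sent to the API
}

public static class ExerciseMapping
{
    // Mapster copies properties with matching names; UI-only members stay local.
    public static ExerciseModel ToModel(ExerciseDto dto)
    {
        return dto.Adapt<ExerciseModel>();
    }
}
The UI then binds to ExerciseModel and toggles IsCollapsed locally, while only ExerciseDto travels over the wire.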
APPROACH 2.
Subclass the DTO. The basic idea is to deserialize the DTO into the derived class, then call Init(); if needed. All properties that you manually implemented as new with OnPropertyChanged will update bindings after being populated by the deserializer/mapper, and you also have a backup plan: call RaiseProperties(); for all of the props, even those that do not have OnPropertyChanged in place, so they can update bindings if any.
Example:
Our API DTO:
public class SomeDeviceDTO
{
    public int Id { get; set; }
    public int Port { get; set; }
    public string Name { get; set; }
}
Our derived class for usage in mobile client:
public class SomeDevice : SomeDeviceDTO, IFromDto
{
    // Requires: using System.ComponentModel; using System.Runtime.CompilerServices;
    //
    // We want to be able to change this Name property at run time and to
    // reflect changes, so we make it bindable (other props will remain without
    // OnPropertyChanged, BUT we can always update all bindings in code if needed
    // using RaiseProperties();):
    private string _name;
    public new string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                OnPropertyChanged();
            }
        }
    }

    // ADD any properties you need for the UI
    // ...

    #region IFromDto
    public void Init()
    {
        // Put any code you'd want to execute after the DTO has been imported,
        // for example to fill any new prop with data derived from what you received.
    }

    public void RaiseProperties()
    {
        var props = this.GetType().GetProperties();
        foreach (var property in props)
        {
            if (property.CanRead)
            {
                OnPropertyChanged(property.Name);
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = "")
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
    #endregion
}
public interface IFromDto : INotifyPropertyChanged
{
    //
    // Summary:
    //     Can initialize the model after it has been loaded from the DTO
    void Init();

    //
    // Summary:
    //     Notify that all properties were updated
    void RaiseProperties();
}
We can get it like: var device = JsonConvert.DeserializeObject<SomeDevice>(jsonOfSomeDeviceDTO);
We can then call Init(); if needed.
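A short usage sketch tying the pieces above together (Newtonsoft.Json assumed, as in the line above):
// Deserialize straight into the derived class, then run the optional post-load steps.
var device = JsonConvert.DeserializeObject<SomeDevice>(jsonOfSomeDeviceDTO);
device.Init();              // optional: derive extra UI data from the received values
device.RaiseProperties();   // optional: re-raise change notifications for every property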
Feel free to edit this answer to add more approaches..

TelemetryProcessor - Multiple instances overwrite Custom Properties

I have a very basic HTTP POST-triggered API which creates a TelemetryClient. I needed to provide a custom property in this telemetry for each individual request, so I implemented a TelemetryProcessor.
However, when subsequent POST requests are handled and a new TelemetryClient is created, it seems to interfere with the first request. I end up seeing maybe a dozen or so entries in App Insights containing the first customPropertyId, and close to 500 for the second, when in reality the numbers should be split evenly. It seems as though the creation of the second TelemetryClient somehow interferes with the first.
Basic code is below, if anyone has any insight (no pun intended) as to why this might occur, I would greatly appreciate it.
ApiController which handles the POST request:
public class TestApiController : ApiController
{
    public HttpResponseMessage Post([FromBody]RequestInput request)
    {
        try
        {
            Task.Run(() => ProcessRequest(request));
            return Request.CreateResponse(HttpStatusCode.OK);
        }
        catch (Exception)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, Constants.GenericErrorMessage);
        }
    }

    private async void ProcessRequest(RequestInput request)
    {
        string customPropertyId = request.customPropertyId;
        // The trace handler creates the TelemetryClient for the custom property
        CustomTelemetryProcessor handler = new CustomTelemetryProcessor(customPropertyId);
        // etc.....
    }
}
CustomTelemetryProcessor which creates the TelemetryClient:
public class CustomTelemetryProcessor
{
    private readonly string _customPropertyId;
    private readonly TelemetryClient _telemetryClient;

    public CustomTelemetryProcessor(string customPropertyId)
    {
        _customPropertyId = customPropertyId;
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.Use((next) => new TelemetryProcessor(next, _customPropertyId));
        builder.Build();
        _telemetryClient = new TelemetryClient();
    }
}
TelemetryProcessor:
public class TelemetryProcessor : ITelemetryProcessor
{
    private string CustomPropertyId { get; }
    private ITelemetryProcessor Next { get; set; }

    // Link processors to each other in a chain.
    public TelemetryProcessor(ITelemetryProcessor next, string customPropertyId)
    {
        CustomPropertyId = customPropertyId;
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!item.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            item.Context.Properties.Add("CustomPropertyId", CustomPropertyId);
        }
        else
        {
            item.Context.Properties["CustomPropertyId"] = CustomPropertyId;
        }
        Next.Process(item);
    }
}
It's better to avoid creating a TelemetryClient for each request; instead, re-use a single static TelemetryClient instance. Telemetry Processors and/or Telemetry Initializers should also typically be registered only once for the telemetry pipeline, not for every request. TelemetryConfiguration.Active is static, and by adding a new Processor with each request the chain of processors only grows.
The appropriate setup would be to add a Telemetry Initializer (Telemetry Processors are typically used for filtering, Initializers for data enrichment) once into the telemetry pipeline, e.g. through adding an entry to the ApplicationInsights.config file (if present) or via code on TelemetryConfiguration.Active somewhere in Global.asax, e.g. Application_Start:
TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
Initializers are executed in the same context/thread where Track..(..) was called / the telemetry was created, so they have access to thread-local storage and/or local objects to read parameters/values from.
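For illustration (not part of the original answer), a minimal sketch of such an initializer, assuming the per-request value has been stashed somewhere request-local such as HttpContext.Current.Items under a hypothetical "CustomPropertyId" key:
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class MyTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Read the per-request value from request-local state; the key name is hypothetical.
        var customPropertyId = HttpContext.Current?.Items["CustomPropertyId"] as string;
        if (!string.IsNullOrEmpty(customPropertyId) &&
            !telemetry.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            telemetry.Context.Properties.Add("CustomPropertyId", customPropertyId);
        }
    }
}
Registered once (as in the line above), this enriches every telemetry item created on the request's context without rebuilding the processor chain per request.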

ASP.NET, thread and connection-string stored in Session

My application can connect to multiple databases (every database has the same schema). I store the current DB, selected by the user, in Session and encapsulate access using a static property like:
public class DataBase
{
    public static string CurrentDB
    {
        get
        {
            return HttpContext.Current.Session["CurrentDB"].ToString();
        }
        set
        {
            HttpContext.Current.Session["CurrentDB"] = value;
        }
    }
}
Other pieces of code access the static CurrentDB to determine which DB to use.
Some actions start a background process in a thread, and it needs access to the CurrentDB to do some work. I'm thinking of using something like this:
[ThreadStatic]
private static string _threadSafeCurrentDB;

public static string CurrentDB
{
    get
    {
        if (HttpContext.Current == null)
            return _threadSafeCurrentDB;
        return HttpContext.Current.Session["CurrentDB"].ToString();
    }
    set
    {
        if (HttpContext.Current == null)
            _threadSafeCurrentDB = value;
        else
            HttpContext.Current.Session["CurrentDB"] = value;
    }
}
And start the thread like this:
public class MyThread
{
    private string _currentDB;
    private Thread _thread;

    public MyThread(string currentDB)
    {
        _currentDB = currentDB;
        _thread = new Thread(DoWork);
    }

    public void Start()
    {
        _thread.Start();
    }

    public void DoWork()
    {
        DataBase.CurrentDB = _currentDB;
        // ... Do the work
    }
}
Is this bad practice?
Actually, I think you should be able to determine which thread uses which database, so I would create a class that wraps a Thread but is aware of the database it uses. It should have a getDB() method, so if you need a new thread that will use the same database as another specific thread, you can use it. You should be able to setDB(db) on it as well.
In the session you are using a current-DB approach, which assumes that there is a single current DB. If this assumption holds, then you can leave it as it is and update it whenever a new current DB is used. If you have to use several databases at the same time, then you might want a Dictionary of databases, where the value would be the DB and the key would be some kind of code with a semantic meaning that lets you determine which instance is needed where.
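For illustration (not part of the original answer), a minimal sketch of such a database-aware wrapper; the class and member names are hypothetical, and it reuses the DataBase.CurrentDB property from the question:
using System;
using System.Threading;

public class DbAwareThread
{
    private readonly Thread _thread;
    private string _db;

    public DbAwareThread(string db, Action work)
    {
        _db = db;
        _thread = new Thread(() =>
        {
            // Make the DB visible to code running on this background thread
            // (e.g. via the [ThreadStatic] field shown in the question).
            DataBase.CurrentDB = _db;
            work();
        });
    }

    // Which database this worker runs against.
    public string GetDB()
    {
        return _db;
    }

    // Point this (not yet started) worker at a different database.
    public void SetDB(string db)
    {
        _db = db;
    }

    public void Start()
    {
        _thread.Start();
    }
}
A second worker can then be created against the same database with new DbAwareThread(existingWorker.GetDB(), someWork).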

Signalr & Nancyfx integration

My app flow is as follows (simplified for clarity):
User GETs a page from "/page1"
User performs actions on the page (adds text, clicks, etc.), while SignalR communicates this data to the server, which performs heavy calculations in the background, and the results of those are returned to the page (let's call those "X").
When the user is finished with the page, he clicks a link to "/page2", which is returned by Nancy. This page is built using a Model that is dependent on X.
So, how do I build that Model based on X? How can signalr write to the user session in a way that Nancy can pick up on?
(I'm looking for a "clean" way)
Pending formal integration of SignalR & Nancy, this is what I came up with. Basically, I share an IoC container between the two and use an object (singleton lifetime) that maps users to state.
How to share an IoC container using the built-in TinyIoC:
Extend Signalr's DefaultDependencyResolver
public class TinyIoCDependencyResolver : DefaultDependencyResolver
{
    private readonly TinyIoCContainer m_Container;

    public TinyIoCDependencyResolver(TinyIoCContainer container)
    {
        m_Container = container;
    }

    public override object GetService(Type serviceType)
    {
        return m_Container.CanResolve(serviceType) ? m_Container.Resolve(serviceType) : base.GetService(serviceType);
    }

    public override IEnumerable<object> GetServices(Type serviceType)
    {
        var objects = m_Container.CanResolve(serviceType) ? m_Container.ResolveAll(serviceType) : new object[] { };
        return objects.Concat(base.GetServices(serviceType));
    }
}
Replace Signalr's default DependencyResolver with our new one
public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        CookieBasedSessions.Enable(pipelines);

        // Replace UserToStateMap with your class of choice
        container.Register<IUserToStateMap, UserToStateMap>();
        GlobalHost.DependencyResolver = new TinyIoCDependencyResolver(container);
        RouteTable.Routes.MapHubs();
    }
}
Add IUserToStateMap as a dependency in your hubs and Nancy modules
public class MyModule : NancyModule
{
    public MyModule(IUserToStateMap userToStateMap)
    {
        Get["/"] = o =>
        {
            var userId = (string)Session["userId"];
            var state = userToStateMap[userId];
            return state.Foo;
        };
    }
}

public class MyHub : Hub
{
    private readonly IUserToStateMap m_UserToStateMap;

    public MyHub(IUserToStateMap userToStateMap)
    {
        m_UserToStateMap = userToStateMap;
    }

    public string MySignalrMethod(string userId)
    {
        var state = m_UserToStateMap[userId];
        return state.Bar;
    }
}
What I would really want is a way to easily share state between the two based on the connection ID or something like that, but in the meantime this solution works for me.
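For illustration (the original answer does not show these types), a minimal sketch of what the shared IUserToStateMap singleton could look like; UserState and its Foo/Bar members are placeholders matching the snippets above:
using System.Collections.Concurrent;

public class UserState
{
    public string Foo { get; set; }   // read by the Nancy module
    public string Bar { get; set; }   // read by the hub
}

public interface IUserToStateMap
{
    UserState this[string userId] { get; set; }
}

// Registered with singleton lifetime so the hub and the module share one map.
public class UserToStateMap : IUserToStateMap
{
    private readonly ConcurrentDictionary<string, UserState> m_Map =
        new ConcurrentDictionary<string, UserState>();

    public UserState this[string userId]
    {
        get { return m_Map.GetOrAdd(userId, _ => new UserState()); }
        set { m_Map[userId] = value; }
    }
}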
Did you arrive here looking for a simple example of how to integrate Nancy and SignalR? I know I did.
Try this question instead (I self-answered it).
SignalR plus NancyFX : A simple but well worked example

Factory pattern to "set" product in addition to "get" product?

I am new to design patterns. I am looking for a pattern similar to the factory pattern but that will also let me "set" the product. Something like this:
class VehicleFactory
{
    static IVehicle GetVehicle();
    static void SetVehicle(IVehicle vehicle);
}
Is there any known pattern similar to this? Thank you.
EDIT: I am looking to store "POCO" objects in the Session object and use a class to set/get them. I may want to switch persistence to ViewState/database in the future. This is what I have:
// Object to persist in Session.
class Vehicle
{
    string Make { get; set; }
    string Model { get; set; }
}

// Class to set/get objects from Session.
// Please see VehicleFactory above.
The Factory pattern is a creational design pattern which encapsulates the creation of a complex object and isolates the creation process from your business logic.
Here it looks like you want a cache to store and manage the Vehicle instances. I would recommend that you call this class a VehicleCache rather than a Factory and implement it like a cache.
Firstly you should consider having an identifier for the Vehicle object, like a vehicleId. I would also recommend that you implement it as an entity object as described in Domain Driven Design.
Then you can implement your cache like this -
public class VehicleCache
{
    public void Add(IVehicle instanceToAdd)
    {
        // Store instance in session object
    }

    public IVehicle Get(string id)
    {
        // Search and return vehicle from cache
    }

    // More methods and an indexer if required
}
Here are some links that explain how you can implement such a cache for your application in a thread-safe fashion:
https://blogs.infosupport.com/blogs/frankb/archive/2008/12/31/Implementing-a-Thread-Safe-cache-using-the-Parallel-Extensions.aspx
http://www.objectreference.net/post/Implementing-Generic-Caching.aspx
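For illustration (not part of the original answer), a minimal thread-safe, in-memory sketch of the cache idea using ConcurrentDictionary; swapping the backing store for Session, ViewState, or a database would only change the method bodies:
using System.Collections.Concurrent;

public class ConcurrentVehicleCache
{
    private readonly ConcurrentDictionary<string, IVehicle> _vehicles =
        new ConcurrentDictionary<string, IVehicle>();

    // Add or replace the vehicle stored under the given id.
    public void Add(string vehicleId, IVehicle instanceToAdd)
    {
        _vehicles[vehicleId] = instanceToAdd;
    }

    // Return the vehicle for the id, or null if it was never cached.
    public IVehicle Get(string vehicleId)
    {
        IVehicle vehicle;
        return _vehicles.TryGetValue(vehicleId, out vehicle) ? vehicle : null;
    }
}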
Like Unmesh says, the name is kind of misleading.
You simply want to cache an IVehicle object.
public interface IVehicleCache
{
    IVehicle GetVehicle();
    void SetVehicle(IVehicle vehicle);
}
// Implementation for HTTP session
public class HttpSessionVehicleCache : IVehicleCache
{
    public IVehicle GetVehicle()
    {
        return (IVehicle)HttpContext.Current.Session["Vehicle"];
    }

    public void SetVehicle(IVehicle vehicle)
    {
        HttpContext.Current.Session["Vehicle"] = vehicle;
    }
}
