Should I use ASP.NET Profiles here?

I'm working on a rewrite of a website that has to pull its user data from a third-party system accessed via remote objects. In the interest of standardization, I've implemented a custom MembershipProvider for authentication and a custom RoleProvider for authorization, but I'm now pondering the best way to deal with profile data. Most of the advice I've seen says to use the ProfileProvider model to deal with profile data if at all possible, but its design really seems not to mesh well with the system I have to interface with.
The biggest sticking point is the ProfileProvider's insistence that a profile is a bag of objects operated on as a single entity. That may be fine for a SQL-based provider, but calling the remote objects is a very expensive operation. If I make a call to Profile.FirstName, it looks like GetPropertyValues is handed a SettingsPropertyCollection containing every property defined in the profile. For this website, that will include multiple sets of address information, order information, attendance information, and other things besides. Pulling all of that information at once is murderous on performance. It does look like targeted saves can be done through the IsDirty flag on each SettingsPropertyValue object, but only if it's a primitive type... which most of the properties aren't.
Am I understanding this correctly? If all the ProfileProvider provides is "Return this profile" or "Save this profile" then lazy loading seems impossible, and the performance hit is too great. If I ditch the profile model, what's a good alternative method for dealing with all the profile data? Should I just roll my own session-backed mechanism?
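For what it's worth, a hand-rolled, session-backed mechanism along the lines the question suggests could look something like the sketch below. This is only an illustration of the lazy-loading idea; IRemoteUserService stands in for the third-party remote-object calls and is not a real API.

    using System;
    using System.Web.SessionState;

    // Placeholder for the third-party remote-object facade; the actual calls are
    // whatever the external system exposes.
    public interface IRemoteUserService
    {
        string GetFirstName(string userName);
        string GetLastName(string userName);
    }

    public class LazyUserProfile
    {
        private readonly HttpSessionState _session;
        private readonly IRemoteUserService _remote;
        private readonly string _userName;

        public LazyUserProfile(HttpSessionState session, IRemoteUserService remote, string userName)
        {
            _session = session;
            _remote = remote;
            _userName = userName;
        }

        // Each property hits the remote system only on first access, then is served
        // from Session for the rest of the visit, instead of an all-or-nothing profile load.
        public string FirstName
        {
            get { return GetOrLoad("FirstName", () => _remote.GetFirstName(_userName)); }
        }

        public string LastName
        {
            get { return GetOrLoad("LastName", () => _remote.GetLastName(_userName)); }
        }

        private T GetOrLoad<T>(string key, Func<T> load)
        {
            string sessionKey = "Profile." + _userName + "." + key;
            if (_session[sessionKey] == null)
            {
                _session[sessionKey] = load();
            }
            return (T)_session[sessionKey];
        }
    }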

Related

How to structure a Client–Server data Model solution?

I need to write a client–server solution. The server will perform scheduled operations and also serve up data from a SQL DB to the client.
The client is yet to be fully defined, but it will make requests to the server, display data for the user, and pass data back for persistence.
The whole solution is dealing with entities (Users, Products, etc. with their associated attributes).
In my head, both the server and the client need to be aware of these entities in order for them to be efficiently manipulated in code rather than having to unpack JSON and duplicate code.
My question is, should I make a class library containing models (classes or structs) representing these entities that is referenced by both the client- and server-side projects?
Otherwise, is there some standard way of building such a solution?
Thus far I have a client, a server (based on ASP.NET 2) and a Class Library containing entity Models along with some data access logic. Both the client and server projects reference the Class Library. One day in and I'm already starting to doubt my approach, as it feels too clumsy.
I will be working with VS2019 using C#.
This isn't really a question well suited to StackOverflow which aims to solve specific code/tech problems.
It is possible to use the same model (entity) in both client and server, but I highly recommend separating the client model (view model) from the domain model (entity). The reasons for this are:
Clients rarely need, or should be exposed to, every domain field and relationship. Sending entities from server to client involves serialization, and this can result in either performance issues or errors as the serializer "touches" properties and wants to lazy-load them; alternatively you add the cost of eager-loading everything, or you end up with incomplete models where unloaded relationships are left null (not because there aren't any, they just weren't loaded). Client models should be trimmed down to just the data the client needs to see, formatted in a way it can use. Ultimately, sending full entities ships more data than needed to the client and back; keep the payloads over the wire as small as possible.
Security can be an issue when passing entities from client to server. Your UI may only allow users to change a few values in a very particular way, but the temptation is to take that entity, attach it to a DbContext, and update it (one-line updates). However, an entity sent from the client can very easily be tampered with in the browser, which can result in changes being made that you don't expect or allow (e.g. changing an FK relationship).
At best this can allow stale data overwrites where changes made after that record was sent to the client are overwritten silently when the client gets around to submitting their change. Don't trust data coming from a client, especially under the premise of "saving time". Update requests should validate the data coming in and re-load the Entity to check things like the row version before updating allowed values.
Populating view models can be done using a technique supported in EF called projection. This can either be hand-written using .Select or done with tools like AutoMapper and its ProjectTo method, to transform entities and LINQ expressions into simple, dumb, serializable view models. When a view model comes back to the server, you simply load the entity and its associations from the DB by ID, update the allowed values after validation, and call SaveChanges to persist.
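As a rough illustration of this projection-plus-reload pattern (hand-written .Select shown here rather than AutoMapper's ProjectTo; the Order, OrderSummaryViewModel, and AppDbContext names are invented for the example):

    using System.Collections.Generic;
    using System.Linq;

    public class OrderSummaryViewModel
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    public class OrderReadWriteService
    {
        // Read side: project entities straight into flat view models so only the
        // needed columns cross the wire and nothing gets lazy-loaded mid-serialization.
        public List<OrderSummaryViewModel> GetOrders(AppDbContext db)
        {
            return db.Orders
                .Select(o => new OrderSummaryViewModel
                {
                    OrderId = o.OrderId,
                    CustomerName = o.Customer.Name,
                    Total = o.Total
                })
                .ToList();
        }

        // Write side: never attach what the client sent; reload the entity by ID,
        // validate (row version, permissions), copy over only the allowed fields, save.
        public void UpdateOrderTotal(AppDbContext db, OrderSummaryViewModel incoming)
        {
            var order = db.Orders.Single(o => o.OrderId == incoming.OrderId);
            // ... validation / concurrency checks go here ...
            order.Total = incoming.Total;
            db.SaveChanges();
        }
    }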

What is the best way to store temporary user information in asp.net?

When a user logs in, there are a number of attributes I need to retrieve from Active Directory, such as their real name, some contact details, etc. Some of these fields I will be showing quite often in some forms. Active Directory retrieval speed is pretty bad in my case, so I was wondering what would be the best way to store this information in memory when they log in, and then delete it once they log off or time out?
My thoughts so far:
1) Store in Session, but is it safe?
2) Extend the User.Identity and store it there. Not sure that's possible.
3) Store it in some kind of global Dictionary. How would I know that they logged off to remove the key/value pair?
I am using MVC2 for this project and I will not need to write back to ActiveDirectory.
Session objects are quite commonly used for storing information. If you are worried about security, you can use HTTPS for communication or you can make use of State Servers or SQL Server for storing that information.
Yes, as has been said, you can use a profile provider to store the data.
I personally use session data.
My view on the options:
1. Database solution via adding a field to the profile (store the additional user data there).
It is simple to use and does not take long to implement, much like session handling.
2. Session data always lives on the server, so you do not need to care about the database at all; it works without it (I have done a few projects this way).
It is reasonably safe and easy to access, you can add an expiry time, and you can keep more information about the user.
3. The third option (a global dictionary):
You would need a timestamp, plus a check on every request for entries that have expired so you can remove them, which in effect means maintaining a dictionary entry for every user...
The choice is yours....
Hope it helps
I would put it in session as long as it's not a large amount of data. Seems like it's a perfect fit for session state. There shouldn't be any safety concerns if you use https.
If you're concerned about the amount of data you'd be putting in session state you could also consider using the ASP.NET Profile Provider but you'd need to have some type of mechanism to keep the data synched with AD (maybe each time the user logs in). That being said, if it's not a huge amount of data I think session is the way to go.
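A minimal sketch of the "load once at login, keep it in Session" approach (the UserInfo class and the specific AD properties are just examples, not from the original post):

    using System;
    using System.DirectoryServices.AccountManagement;
    using System.Web.Mvc;

    [Serializable]
    public class UserInfo
    {
        public string DisplayName { get; set; }
        public string Email { get; set; }
        public string Phone { get; set; }
    }

    public class AccountController : Controller
    {
        public ActionResult LogOn(string userName, string password)
        {
            // ... authenticate the user here ...

            // One AD round trip at login; later requests read from Session only.
            using (var ctx = new PrincipalContext(ContextType.Domain))
            {
                var adUser = UserPrincipal.FindByIdentity(ctx, userName);
                if (adUser != null)
                {
                    Session["UserInfo"] = new UserInfo
                    {
                        DisplayName = adUser.DisplayName,
                        Email = adUser.EmailAddress,
                        Phone = adUser.VoiceTelephoneNumber
                    };
                }
            }
            return RedirectToAction("Index", "Home");
        }
    }

Marking the cached class [Serializable] keeps the option of moving to a State Server or SQL Server session store open later.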

ASP.net application session cache best practices and patterns

In ASP.NET the major data stores are application state and session state, and we also have the object cache.
I have used common-sense hints/tips (e.g. never put user-specific data in application state, never put unmanaged resources in session, etc.), but to be honest I have never come across any recommendations or examples, in MSDN or from prominent figures like Haack and the Gu, that cover all three together (e.g. Google's first hit to MSDN talks about using application state as a global cache; if that's the case, what's the object cache for?).
Also, something I find seldom discussed is a comparison of scenarios; for example, I know it's easy to unnecessarily drive up memory usage by overusing session, but what happens if you use the object cache as an alternative to store the same data?
Edit: This is the best information I have found so far: http://msdn.microsoft.com/en-us/library/ff647787.aspx
Use Session to store user-specific information, since the framework automatically associates each session store with a specific user.
Use the Object Cache for information that can be cached once and reused across the entire application or across a set of users. If you store user-specific data in the Object Cache then you'll have to invent some mechanism to associate cache entries with individual users. Not only would this require extra work on your part, but you might do it in such a way that increases the likelihood of a nefarious user somehow doing something akin to session spoofing.
I don't know when you'd ever need to use the Application object. If I'm not mistaken, the Application object is more of a relic from classic ASP than anything else.
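To make the Session-versus-object-cache distinction concrete, here is a small sketch of the object-cache side: data shared by all users is loaded once and cached with an expiration (ProductCatalog and the load method are placeholders for whatever shared data you actually cache).

    using System;
    using System.Web;
    using System.Web.Caching;

    public class ProductCatalog { /* placeholder for the shared data you cache */ }

    public static class CatalogCache
    {
        public static ProductCatalog GetCatalog()
        {
            var cached = HttpRuntime.Cache["ProductCatalog"] as ProductCatalog;
            if (cached == null)
            {
                cached = LoadCatalog();   // expensive lookup shared by all users
                HttpRuntime.Cache.Insert(
                    "ProductCatalog",
                    cached,
                    null,                              // no cache dependency
                    DateTime.UtcNow.AddMinutes(30),    // absolute expiration
                    Cache.NoSlidingExpiration);
            }
            return cached;
        }

        private static ProductCatalog LoadCatalog()
        {
            return new ProductCatalog(); // stand-in for the real database call
        }
    }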
Another form of caching that can be just as important is per-request caching via the HttpContext.Items collection. This allows you to cache data for the lifetime of a request and is useful if you keep requesting the same data during a single request (such as from different User Controls on the page). For more information on this approach, see HttpContext.Items - a Per-Request Cache Store.
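A quick sketch of that per-request pattern (UserSettings and the load call are illustrative):

    using System.Web;

    public class UserSettings { /* placeholder */ }

    public static class RequestCache
    {
        // Computed at most once per HTTP request; every control or helper that asks
        // for it during the same request gets the already-loaded instance.
        public static UserSettings CurrentUserSettings
        {
            get
            {
                var items = HttpContext.Current.Items;
                if (items["UserSettings"] == null)
                {
                    items["UserSettings"] = LoadUserSettings();
                }
                return (UserSettings)items["UserSettings"];
            }
        }

        private static UserSettings LoadUserSettings()
        {
            return new UserSettings(); // stand-in for the real per-request lookup
        }
    }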
I'd suggest creating a wrapper class, at least for the session, if those get used throughout your code. That way, you can inject an instance of the class to do the real work, and use a mocked version for unit tests. I did this for a large project where the session was widely used, and it worked out rather well.
You can combine this with the facade pattern: the wrapper provides the specific methods that you need instead of exposing the general interface. As an example, the session takes objects and returns objects; it is not strongly typed. The wrapper can have strongly typed add and get methods.
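For instance, a wrapper along these lines (names invented for the example) gives you strongly typed access and something you can mock in unit tests:

    using System.Web;

    public interface IUserSessionStore
    {
        string DisplayName { get; set; }
        int? SelectedAccountId { get; set; }
    }

    public class HttpUserSessionStore : IUserSessionStore
    {
        // Strongly typed members hide the untyped object bag underneath.
        public string DisplayName
        {
            get { return (string)HttpContext.Current.Session["DisplayName"]; }
            set { HttpContext.Current.Session["DisplayName"] = value; }
        }

        public int? SelectedAccountId
        {
            get { return (int?)HttpContext.Current.Session["SelectedAccountId"]; }
            set { HttpContext.Current.Session["SelectedAccountId"] = value; }
        }
    }

    // In unit tests, swap in a simple in-memory implementation of IUserSessionStore
    // instead of touching HttpContext at all.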

Scalable/Reusable Authorization Model

Ok, so I'm looking for a bit of architecture guidance, my team is getting a chance to re-cast certain decisions with a new feature that we're building, and I wanted to see what SO thought :-) There are of course certain things that we're not changing, so the solution would have to fit in this model. Namely, that we've got an ASP.NET application, which uses web services to allow users to perform actions on the system.
The problem comes in because, as with many systems, different users need access to different functions. Some roles have access to Y button, and others have access to Y and B button, while another still only has access to B. Most of the time that I see this, developers just put in a mish-mosh of if statements to deal with the UI state. My fear is that left unchecked, this will become an unmaintainable mess, because in addition to putting authorization logic in the GUI, it needs to be put in the web services (which are called via ajax) to ensure that only authorized users call certain methods.
So my question to you is: how can a system be designed to reduce the random ad hoc if statements here and there that check for specific roles, so that the logic can be reused in both GUI/webform code and web service code?
Just for clarity, this is an ASP.NET web application, using webforms, and Script# for the AJAX functionality. Don't let the script# throw you off of answering, it's not fundamentally different than asp.net ajax :-)
Moving on from traditional group, role, or operation-level permissions, there is a push toward "claims-based" authorization, like what was delivered with WCF.
Zermatt is the codename for the Microsoft class library that will help developers build claims-based applications on the server and client. Active Directory will become one of the STSs an application would be able to authorize against, concurrently with your own as well as other industry-standard servers...
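The claims-based shape, very roughly, is that authorization asks whether the principal carries a particular claim rather than whether it is in a particular role. The sketch below uses the later System.Security.Claims types purely for illustration; the Zermatt/WIF-era APIs differ in detail, and the claim type and value are invented.

    using System.Security.Claims;

    public static class OrderPermissions
    {
        public static bool CanApprove(ClaimsPrincipal user)
        {
            // The claim may have been issued by AD FS, your own STS, or any other
            // trusted issuer; the application only checks that it is present.
            return user.HasClaim("http://example.com/claims/permission", "ApproveOrders");
        }
    }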
In Code Complete (p. 411) Steve McConnell gives the following advice (which Bill Gates reads as a bedtime story in the Microsoft commercial).
"used in appropriate circumstances, table driven code is simpler than complicated logic, easier to modify, and more efficient."
"You can use a table to describe logic that's too dynamic to represent in code."
"The table-driven approach is more economical than the previous approach [rote object oriented design]"
Using a table-based approach you can easily add new "users" (as in the modeling idea of a user/agent along with its actions). It's a good way to avoid many "if"s. I've used it before in situations like yours, and it kept the code nice and tidy.
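Applied to the Y-button/B-button problem above, the table-driven idea might look like the sketch below (role and operation names invented): both the webform code and the web-service methods consult the same table instead of scattering role checks.

    using System.Collections.Generic;
    using System.Linq;

    public static class PermissionTable
    {
        // The "table": edit this data, not if-statements scattered around the app.
        private static readonly Dictionary<string, string[]> RoleOperations =
            new Dictionary<string, string[]>
            {
                { "Clerk",   new[] { "PressB" } },
                { "Manager", new[] { "PressY", "PressB" } },
                { "Auditor", new[] { "PressY" } },
            };

        public static bool IsAllowed(IEnumerable<string> userRoles, string operation)
        {
            return userRoles.Any(r =>
                RoleOperations.ContainsKey(r) && RoleOperations[r].Contains(operation));
        }
    }

    // UI:      buttonB.Visible = PermissionTable.IsAllowed(Roles.GetRolesForUser(), "PressB");
    // Service: if (!PermissionTable.IsAllowed(Roles.GetRolesForUser(), "PressB")) reject the call.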

Is it wrong to switch client logic in the service tier?

We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/App layer should depend on the businesslogic layer, the businesslogic layer should depend on the data access layer, and similar.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services will share a lot of code or call a single internal service with different parameterization depending on what outer service was called.
From a design perspective, this is no different from having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread; it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
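To illustrate the "credentials ride on the thread" point: once the principal is set (normally by the authentication plumbing), downstream code can read it without any extra parameter. A tiny sketch using the built-in generic principal types:

    using System.Security.Principal;
    using System.Threading;

    public static class PrincipalExample
    {
        public static void Authenticate(string userName)
        {
            // Normally done by the authentication module, not application code.
            var identity = new GenericIdentity(userName);
            Thread.CurrentPrincipal = new GenericPrincipal(identity, new[] { "AgentApp" });
        }

        public static void SomeServiceMethod()
        {
            // No explicit parameter: the identity travels with the call.
            string caller = Thread.CurrentPrincipal.Identity.Name;
            bool isAgent = Thread.CurrentPrincipal.IsInRole("AgentApp");
        }
    }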
This is a hard topic to discuss without a solid example.
You are right for feeling that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request. Something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). Now you must exchange a session key (HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless - especially if you want to scale out without session replication... if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational, which is why it is a handy thing to log. An operating environment, on the other hand, does have a specific meaning.
Here is a scenario to consider: What if a new client type is developed that is slightly different in a way that would break compatibility with the original request? Now you have two requests. 2 clients use Request A and 1 client uses Request B. If you pass in a client type to each request, the server is expected to work for every possible client type. Much harder to test and maintain!!
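To make the "separate methods instead of a magic client-type parameter" point concrete, a sketch (all names invented): each caller invokes the operation whose behaviour it actually wants, and the shared logic lives behind both.

    public class Quote { /* placeholder result type */ }

    public class QuoteService
    {
        // Called by the web (self-service, unattended) client.
        public Quote GetQuoteForSelfService(int productId)
        {
            return BuildQuote(productId, false /* includeInternalPricingNotes */);
        }

        // Called by the agent (attended, secure) client.
        public Quote GetQuoteForAgent(int productId)
        {
            return BuildQuote(productId, true /* includeInternalPricingNotes */);
        }

        // The shared implementation sits behind both public operations; callers
        // never tell the service "who they are", they just call the operation they want.
        private Quote BuildQuote(int productId, bool includeInternalPricingNotes)
        {
            // ... shared logic ...
            return new Quote();
        }
    }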
