I need to write a client–server solution. The server will perform scheduled operations and also serve up data from a SQL DB to the client.
The client is yet to be fully defined, but it will make requests to the server, display data for the user, and pass data back for persistence.
The whole solution is dealing with entities (Users, Products, etc. with their associated attributes).
In my head, both the server and the client need to be aware of these entities so they can be manipulated efficiently in code, rather than having to unpack JSON and duplicate code.
My question is, should I make a class library containing models (classes or structs) representing these entities that is referenced by both the client- and server-side projects?
Otherwise, is there some standard way of building such a solution?
Thus far I have a client, a server (based on ASP.NET 2) and a Class Library containing entity Models along with some data access logic. Both the client and server projects reference the Class Library. One day in and I’m already starting to doubt my approach as being too clumsy.
I will be working with VS2019 using C#.
This isn't really a question well suited to StackOverflow which aims to solve specific code/tech problems.
It is possible to use the same model (entity) in both client and server, but I highly recommend separating the client model (view model) from the domain model (entity). The reasons for this are:
Clients rarely need, or should be exposed to, every domain field and relationship. Sending domain models from server to client involves serialization, which can cause either performance problems or errors as the serializer "touches" properties and wants to lazy-load them; otherwise you add the cost of eager-loading everything, or you end up with incomplete models where unloaded relationships are left null (not because there aren't any, they just weren't loaded). Client models should be trimmed down to just the data the client needs to see, formatted in a way it can use. Shipping full entities means sending more data than needed to the client and back; keep the payloads over the wire as small as possible.
Security can be an issue when passing entities from client to server. Your UI may only allow users to change a few values in a very particular way, but the temptation is to take that entity, attach it to a DB context, and update it (one-line updates). However, an entity sent from the client can very easily be tampered with in the browser, which can result in changes being made that you don't expect or allow (i.e. changing an FK relationship).
At best this allows stale data overwrites, where changes made after the record was sent to the client are silently overwritten when the client eventually submits its change. Don't trust data coming from a client, especially under the premise of "saving time". Update requests should validate the incoming data and re-load the entity to check things like the row version before updating the allowed values.
View models can be produced using a technique EF supports called projection. This can either be hand-written using .Select, or you can leverage tools like AutoMapper and its ProjectTo method to easily transform entities and LINQ expressions into simple, dumb, serializable view models. When a view model comes back to the server, you simply load the entity and its associations from the DB by ID, update the allowed values after validation, and call SaveChanges to persist.
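A rough sketch of that shape is below. The entity, view model, and context names are invented for illustration, EF Core is assumed (EF6 is very similar), and the row-version check is just one way to guard against stale overwrites:

// Minimal sketch, not the poster's actual code. Product, ProductViewModel,
// and AppDbContext are invented names.
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public byte[] RowVersion { get; set; }
    // ...other fields and navigation properties the client never sees
}

public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public byte[] RowVersion { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class ProductService
{
    private readonly AppDbContext _context;
    public ProductService(AppDbContext context) => _context = context;

    // Projection: only the fields the client needs are queried and serialized.
    public ProductViewModel GetProduct(int id) =>
        _context.Products
            .Where(p => p.Id == id)
            .Select(p => new ProductViewModel
            {
                Id = p.Id,
                Name = p.Name,
                Price = p.Price,
                RowVersion = p.RowVersion
            })
            .Single();

    // Update: re-load the entity by ID, check the row version, then copy only
    // the values the client is allowed to change before saving.
    public void UpdateProduct(ProductViewModel model)
    {
        var entity = _context.Products.Single(p => p.Id == model.Id);

        if (!entity.RowVersion.SequenceEqual(model.RowVersion))
            throw new InvalidOperationException("The record was changed by someone else.");

        entity.Name = model.Name;
        entity.Price = model.Price;

        _context.SaveChanges();
    }
}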
Related
For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach to doing this and will setting the connection string in page_init work? Is using a session variable wise? I've tended not to ever use them except when no other solution was possible.
The problem is that this model is really complex and can leave you with errors, especially when it comes to changes in the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. When you factor in downtime, things get even worse.
I would approach this slightly differently: abstract your database layer behind an API that calls the database. From the website you always call the API, passing the domain you want the data to come from.
You may ask what advantage this gives you. The biggest one shows up when doing upgrades and maintenance: one API per client is a lot easier to reason about than one database per client. And if you really want to have just one API (I would still recommend one per client, deployed automatically), you can switch databases per call based on a parameter you pass to the API (it can be in a header, like the subdomain), as in the sketch below.
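A minimal sketch of that per-request switch, assuming an ASP.NET-style System.Web pipeline; the tenant map, class names, and connection strings are invented for illustration:

using System;
using System.Collections.Generic;
using System.Web;

public static class TenantConnectionResolver
{
    // In practice this map would come from configuration rather than being hard-coded.
    private static readonly Dictionary<string, string> ConnectionStrings =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            ["clienta"] = "Server=db1;Database=ClientA;Integrated Security=true",
            ["clientb"] = "Server=db1;Database=ClientB;Integrated Security=true"
        };

    public static string Resolve(HttpRequest request)
    {
        // "clienta.example.com" -> "clienta"
        string subdomain = request.Url.Host.Split('.')[0];

        if (!ConnectionStrings.TryGetValue(subdomain, out string connectionString))
            throw new InvalidOperationException("Unknown tenant: " + subdomain);

        return connectionString;
    }
}

// Usage: resolve the connection string on every call instead of storing it in session state.
// var connectionString = TenantConnectionResolver.Resolve(HttpContext.Current.Request);
// using (var connection = new SqlConnection(connectionString)) { ... }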
Let me give you a sample scenario and how I would suggest approaching it (this holds whether the backend is a database or an API).
Say I want to include a new data field. The first thing is to add the field on the backend (API or database) and deploy it. If it is an API, you can even test it by calling the API and seeing that the new field is now returned; that is not a problem for your UI, because it is just a field the UI does not use yet. After that, you change the UI to actually use the field and deploy that to production.
At the risk of revealing my ignorance I confess that I am confused about the purpose of the lock_id and locked columns in the custom ASP.NET session store example from Microsoft. I get that this schema is designed for consumption in a multi-threaded environment and by many applications, so it makes sense that the PK includes the session identifier as well as the application identifier, allowing applications to re-use session identifiers. What doesn't make sense is the fact that the lock_id does not appear to reference a foreign key constraint.
Since Microsoft didn't include much information about the nature and reasons for the lock_id I am led to assume that it is obvious. Intuitively, it makes sense that it would be useful to indicate whether a session is being handled by, say, a particular application server at a given time, but I don't see how this physically translates into the schema.
Any clarification is appreciated.
FWIW, this answer is equal parts guess and knowledge.
guess:
The database schema is used by a class called SessionStateStoreProviderBase, or one of its subclasses. In particular, it defines a method called:
GetItemExclusive
Now, I don't know for sure that the parameters it takes in map directly to the columns stored in the schema (and I'm not saying it's aliens, but...).
knowledge:
As for what they are used for, the answer lies in your original question: "I get that this schema is designed for consumption in a multi-threaded environment and by many applications."
Within a single application, there may be multiple threads serving the same session id (consider one ASP.NET website where a single user loads two pages simultaneously). The default behavior of .NET session state is to lock the session id for one of those pages and assign a random lock id to it. When that page finishes, it releases the lock, and the next thread in line grabs it and assigns a new lock id. Normally this would all take place in memory, but if you are persisting the session state to the database then it makes sense for the lock id to go along with it. Support for this behavior is why the lock id is required in addition to the application id and session id.
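To make that concrete, here is a rough sketch (my own guess at the shape, not Microsoft's sample code) of how a custom provider's GetItemExclusive might use the locked and lock_id columns; the table and column names are illustrative:

using System.Data.SqlClient;

public static bool TryAcquireSessionLock(SqlConnection connection,
    string sessionId, string applicationName, out int lockId)
{
    // Take the lock only if no other request currently holds it.
    var acquire = new SqlCommand(
        "UPDATE Sessions " +
        "SET Locked = 1, LockDate = GETUTCDATE(), LockId = LockId + 1 " +
        "WHERE SessionId = @id AND ApplicationName = @app AND Locked = 0",
        connection);
    acquire.Parameters.AddWithValue("@id", sessionId);
    acquire.Parameters.AddWithValue("@app", applicationName);

    if (acquire.ExecuteNonQuery() == 0)
    {
        // Another request holds the lock; the caller reports the lock age and retries.
        lockId = -1;
        return false;
    }

    // Read back the new lock id so it can be handed to SetAndReleaseItemExclusive later.
    var read = new SqlCommand(
        "SELECT LockId FROM Sessions WHERE SessionId = @id AND ApplicationName = @app",
        connection);
    read.Parameters.AddWithValue("@id", sessionId);
    read.Parameters.AddWithValue("@app", applicationName);
    lockId = (int)read.ExecuteScalar();
    return true;
}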
I'm learning Meteor and fundamentally enjoy how fast I can build data driven applications however as I went through the Creating Posts chapter in the Discover Meteor book I learned about using server side Methods. Specifically the primary reason (and there are a number of very valid reasons to use these) was because of the timestamp. You wouldn't want to rely on the client date/time, you'd want to use the server date/time.
Makes sense, except that in almost every application I've ever built we store the date/time of row create/update in a column. Effectively every single create or update to the database records a date/time, which in Meteor now looks like it would require server-side Methods to ensure data integrity.
If I'm understanding correctly that pretty much eliminates the ease of use and real-time nature of a client side Collection because I'll need to use Methods for almost every single update and create to our databases.
Just wanted to check and see how everyone else is doing this in the real world. Are you just querying a server side Method that just returns the date/time and then using client side Collection or something else?
Thanks!
The short answer to this question is that yes, every operation that affects the server's database will go through a server-side method. The only difference is whether you are defining this method explicitly or not.
When you are just getting started with Meteor, you will probably do insert/update/remove operations directly on client collections using validators, which check whether the operation is allowed. This usage is actually calling predefined methods on both the server and client (for a collection named foo you have /foo/insert, for example), which simply check the specified validators before doing the operation. As you become more familiar with Meteor you will probably override these default methods, for the reasons you described (among others).
When using your own methods, you will typically want to define a method both on the server and the client, just as the default collection functions do for you. This is because of Meteor's latency compensation, which allows most client operations to be reflected immediately in the browser without any noticeable lag, as long as they are permitted. Meteor does this by first simulating the effect of a method call in the client, updating the client's cached data temporarily, then sending the actual method call to the server. If the server's method causes a different set of changes than the client's simulation, the client's cache will be updated to reflect this when the server method returns. This also means that if the client's method would have done the same as the server, we've basically allowed for an instant operation from the perspective of the client.
By defining your own methods on the server and client, you can extend this to fill your own needs. For example, if you want to insert timestamps on updates, have the client insert whatever timestamp in the simulation method. The server will insert an authoritative timestamp, which will replace the client's timestamp when the method returns. From the client's perspective, the insert operation will be instant, except for an update to the timestamp if the client's time happens to be way off. (By the way, you may want to check out my timesync package for displaying relative server time accurately on the client.)
A final note: it's good to understand what scope you are doing collection operations in, as this was one of the things that originally confused me about Meteor. For example, if you have a collection instance in the client Foo, Foo.insert() in normal client code will call the default pair of client/server methods. However, Foo.insert() in a client method will run only in a simulation and will never call server code, so you will need to define the same method on the server and make sure you do Foo.insert() there as well for the method to work properly.
A good rule of thumb for moving forward is to replace groups of validated collection operations with your own methods that do the same operations, and then add specific extra features on the server and client respectively.
In short: yes!
Publications exist to send out a 'live', dynamic subset of the database to the client, sending DDP added messages for existing records, followed by a ready message, and then added, changed, and removed messages to keep the client's cache consistent.
Methods exist to cause Mongo updates, directly or indirectly, and as Andrew mentioned, they are always in use.
But truly, because of Meteor's publication architecture, any edit to a collection that is currently being published to at least one client will be sent out via DDP, regardless of the source of the change to Mongo, even an outside process.
I'm working on the following scenario:
I have a console app that populates a SQL Server database with some data. I also have a web app that reads the same database and displays the data on a front-end. Both applications use Entity Framework to communicate with the database (they have the same connection string).
I wonder how the web app can be notified of any changes that have occurred in the database. Bear in mind that the two applications do not reference each other whatsoever.
Is there an event provided by EF that fires when something has changed? In essence, I would like to know when a change has happened, as well as the nature of that change.
I had a similar requirement and I solved it using the EF function:
[context].Database.CompatibleWithModel(throwIfNoMetadata: true)
It returns whether your model matches the underlying database structure, using the metadata table.
Note that I was using a Code First approach.
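A minimal usage sketch, assuming EF Code First with System.Data.Entity (MyDbContext is a placeholder for your own context class):

using System.Data.Entity;

public class MyDbContext : DbContext { }

public static class ModelChecker
{
    public static bool DatabaseMatchesModel()
    {
        using (var context = new MyDbContext())
        {
            // Throws if the model metadata is missing and throwIfNoMetadata is true.
            return context.Database.CompatibleWithModel(throwIfNoMetadata: true);
        }
    }
}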
The msdn definition is below:
http://msdn.microsoft.com/en-us/library/system.data.entity.database.compatiblewithmodel(v=vs.103).aspx
Edit:
Just found an amazing article with a demonstration:
http://blog.oneunicorn.com/2011/04/08/code-first-what-is-that-edmmetadata-table/
This is not something that is related to EF at all. EF is just a library that makes SQL calls and maps them to objects. It has no inside knowledge of the database. As such, when data changes in one application, the other application doesn't know unless it queries to see whether that data has changed (and you're not going to be constantly running queries for that; it's too impractical).
There are, potentially some ways to do this, such as adding triggers to the database, which then call extended stored procs to send messages to the app, but this is a lot of work to go through, and it can possibly compromise the robustness of the database.
There used to be something called Notification Services, but that was deprecated. There is now something called SqlDependency, which may help you in some cases, but it all depends on what you're trying to do exactly.
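For reference, a minimal sketch of the SqlDependency pattern (the class name, query, and connection-string handling are illustrative; it also requires Service Broker to be enabled on the database):

using System.Data.SqlClient;

public class ChangeListener
{
    private readonly string _connectionString;

    public ChangeListener(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // start the listener once per application
    }

    public void Subscribe()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM dbo.Products", connection)) // query must follow the notification rules
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnDataChanged;

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the subscription
        }
    }

    private void OnDataChanged(object sender, SqlNotificationEventArgs e)
    {
        // A notification fires only once; re-subscribe (and re-query) to keep listening.
        Subscribe();
    }
}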
In any event, it's usually easier to find a different way to do what you want. This is a complex topic, and it really requires a lot of SQL Server knowledge.
Hi, I want a sample that does the following:
Database <-> Data Access + Cache <-> Business logic <-> UI
So basically everything you want from the database should be accessible from the cache; if it's not in the cache, the underlying data access layer will populate it and return it, otherwise it is returned from the cache.
Is there any disadvantage? In what scenarios would this be a good solution?
I like creating my own static wrapper class for the System.Web.Caching.Cache class.
Essentially you create a class in your web application module, and create all the standard Cache functions (get, add, remove, etc). The methods need to be implemented with generics to ensure type safety.
Here is a good example
You then create another static class, which acts as like a service model from your web tier through to your data tier.
Your web tier would invoke methods on the static class, which would first generate a CacheKey based on the supplied method parameters, check cache, if found return, otherwise call data layer, add to cache and return.
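A minimal sketch of that wrapper and service pair, assuming System.Web caching (CacheManager, ProductService, Product, and ProductRepository are illustrative names, not part of any framework):

using System;
using System.Web;
using System.Web.Caching;

public static class CacheManager
{
    public static T Get<T>(string key) where T : class
    {
        return HttpRuntime.Cache[key] as T;
    }

    public static void Add<T>(string key, T value, TimeSpan lifetime) where T : class
    {
        HttpRuntime.Cache.Insert(key, value, null,
            DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
    }

    public static void Remove(string key)
    {
        HttpRuntime.Cache.Remove(key);
    }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductRepository
{
    public static Product GetById(int id)
    {
        // Placeholder for the real data access call (e.g. ADO.NET or an ORM).
        return new Product { Id = id, Name = "Sample" };
    }
}

// The static "service" class the web tier calls: check the cache first, fall back to the data layer.
public static class ProductService
{
    public static Product GetProduct(int id)
    {
        string cacheKey = "Product:" + id;

        var cached = CacheManager.Get<Product>(cacheKey);
        if (cached != null)
            return cached;

        var product = ProductRepository.GetById(id); // the data access layer
        CacheManager.Add(cacheKey, product, TimeSpan.FromMinutes(10));
        return product;
    }
}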
Depending on how your business objects are set up, you might need to provide deep copies of the objects you hand out (i.e. implement ICloneable and override the Clone method).
Also, your cache solution depends on your web farm architecture. If you have lots of web servers, chances are your data could become stale, so you need to decide on the best option there (SqlCacheDependency, distributed caching, etc.).
The obvious disadvantages are cache validity (how do you know that the data was not changed/added since you cached it) and memory/disk usage.
It is a good solution when your data is static (no need to think when to update cache).
We used a similar approach with dynamic data, and the cache introduced quite a number of problems. Sometimes cache updates were too expensive (the server had to notify all clients about cached data that had changed); sometimes memory usage on the clients was too high.