Entity Framework listening to SQL Server changes - asp.net

I'm working on the following scenario:
I have a console app that populates a SQL Server database with some data. I have another web app that reads the same database and displays the data on a front-end. Both applications use Entity Framework to communicate with the database (they have the same connection string).
I wonder how the web app can be notified of any changes that occur in the database. Bear in mind that the two applications do not reference each other at all.
Is there an event provided by EF that fires when something has changed? In essence, I would like to know when a change has happened, as well as the nature of that change.

I had a similar requirement and I solved it using the EF function:
[context].Database.CompatibleWithModel(throwIfNoMetadata: true)
It returns whether your model matches the underlying database structure, using the metadata table.
Note that I was using a Code First approach.
The MSDN definition is below:
http://msdn.microsoft.com/en-us/library/system.data.entity.database.compatiblewithmodel(v=vs.103).aspx
Edit:
Just found an amazing article with a demonstration:
http://blog.oneunicorn.com/2011/04/08/code-first-what-is-that-edmmetadata-table/
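For illustration, a minimal sketch of how that call can be used at startup (the context and entity here are placeholders for your own Code First model):

using System.Data.Entity;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public static class StartupChecks
{
    public static bool ModelIsCompatible()
    {
        using (var context = new BlogContext())
        {
            // Throws if the metadata table is missing; returns false when the stored
            // model hash no longer matches the current Code First model.
            return context.Database.CompatibleWithModel(throwIfNoMetadata: true);
        }
    }
}

Note that this detects schema drift between the model and the database, not row-level data changes.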

This is not something that is related to EF at all. EF is just a library that makes SQL calls and maps them to objects. It has no inside knowledge of the database. As such, when data changes in one application, another application doesn't know unless it queries to see whether that data has changed (and you're not going to be constantly running queries for that; it's too impractical).
There are, potentially, some ways to do this, such as adding triggers to the database, which then call extended stored procs to send messages to the app, but this is a lot of work to go through, and it can possibly compromise the robustness of the database.
There used to be something called Notification Services, but that was deprecated. There's now something called SqlDependency objects, which may help you in some cases... but it all depends on what you're trying to do exactly.
In any event, it's usually easier to find a different way to do what you want. This is a complex topic, and it really requires a lot of SQL Server knowledge.
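For completeness, here is a minimal sketch of what the SqlDependency route can look like (the dbo.Items table and its columns are placeholders, and Service Broker has to be enabled on the database for this to work):

using System;
using System.Data.SqlClient;

public static class ChangeListener
{
    public static void Start(string connectionString)
    {
        SqlDependency.Start(connectionString);   // once per connection string, e.g. at app startup
        Subscribe(connectionString);
    }

    private static void Subscribe(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        // The query must follow the notification rules: explicit column list, two-part table name.
        using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // e.Info gives a rough idea of what happened (Insert/Update/Delete).
                Console.WriteLine("Change detected: " + e.Info);
                Subscribe(connectionString);      // subscriptions are one-shot; re-register
            };

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* refresh your cached copy of the data here */ }
            }
        }
    }
}

The notification only tells you that the result set changed; working out exactly what changed is still up to you, which is part of why a different design is often simpler.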

Related

How to structure a Client–Server data Model solution?

I need to write a client–server solution. The server will perform scheduled operations and also serve up data from a SQL DB to the client.
The client is yet to be fully defined, but it will make requests to the server, display data for the user, and pass data back for persistence.
The whole solution is dealing with entities (Users, Products, etc. with their associated attributes).
In my head, both the server and the client need to be aware of these entities in order for them to be efficiently manipulated in code rather than having to unpack JSON and duplicate code.
My question is, should I make a class library containing models (classes or structs) representing these entities that is referenced by both the client- and server-side projects?
Otherwise, is there some standard way of building such a solution?
Thus far I have a client, a server (based on ASP.NET 2) and a Class Library containing entity Models along with some data access logic. Both the client and server projects reference the Class Library. One day in and I’m already starting to doubt my approach as being too clumsy.
I will be working with VS2019 using C#.
This isn't really a question well suited to Stack Overflow, which aims to solve specific code/tech problems.
It is possible to use the same model (entity) in both client and server, but I highly recommend separating the client model (view model) from the domain model (entity). The reasons for this are:
Clients rarely need, and rarely should be exposed to, every domain field and relationship. Sending models from server to client involves serialization. This can result in either performance issues or errors as the serializer "touches" properties and wants to lazy-load them, or you pay the cost of eager-loading everything; or it results in incomplete models where unloaded relationships are left null (not because there aren't any, they just weren't loaded). Client models should be trimmed down to just the data the client needs to see, formatted in a way it can use. Otherwise you are shipping more data than needed to the client and back; keep the payloads over the wire as small as possible.
Security can be an issue when passing entities from client to server. Your UI may only allow users to change a few values in a very particular way, but the temptation is to take that entity, attach it to a DbContext and update it (one-line updates). However, an entity sent from the client can very easily be tampered with in the browser, which can result in changes being made that you don't expect or allow (e.g. changing a FK relationship).
At best this can allow stale-data overwrites, where changes made after that record was sent to the client are silently overwritten when the client gets around to submitting its change. Don't trust data coming from a client, especially under the premise of "saving time". Update requests should validate the incoming data and re-load the entity to check things like the row version before updating the allowed values.
Enabling view models can be done using a technique supported in EF called projection. This can either be hand-written using .Select, or you can leverage tools like AutoMapper and its ProjectTo method to easily transform entities and LINQ expressions into simple, dumb, serializable view models. When a view model comes back to the server, you simply load the entity and its associations from the DB by ID, update the allowed values after validation, and call SaveChanges to persist them.
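As a rough illustration of that projection approach (the entities, the view model and the AutoMapper configuration below are all made up for the example):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Hypothetical domain entities (stand-ins for your own).
public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class OrderLine { public int Id { get; set; } public decimal Price { get; set; } public int Quantity { get; set; } }
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual ICollection<OrderLine> Lines { get; set; }
}

// Trimmed-down view model: only what the client needs to see.
public class OrderSummary
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderQueries
{
    public static List<OrderSummary> GetSummaries(ShopContext db, int customerId)
    {
        // Hand-written projection: EF translates this into a single SELECT that
        // returns only the columns the view model needs, no lazy loading involved.
        return db.Orders
            .Where(o => o.CustomerId == customerId)
            .Select(o => new OrderSummary
            {
                Id = o.Id,
                CustomerName = o.Customer.Name,
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .ToList();

        // AutoMapper alternative (queryable extensions; mapping configuration assumed elsewhere):
        // return db.Orders.Where(o => o.CustomerId == customerId)
        //          .ProjectTo<OrderSummary>(mapperConfig)
        //          .ToList();
    }
}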

How to best architect website when each client has own database and subdomain?

For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach to doing this and will setting the connection string in page_init work? Is using a session variable wise? I've tended not to ever use them except when no other solution was possible.
The problem with this model is that it is really complex and can leave you with errors, especially when we are talking about changes to the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, this means updating 100 different databases. When we talk about dealing with downtime, things get even worse.
I would approach this a little differently: abstract your database layer and create one API that calls the database. From the website, you always call the API, passing the domain that you want the data to come from.
You may ask what advantage this gives you. The biggest one shows up when doing upgrades and maintenance. Having one API per client is much easier to manage than having the website talk to one database per client directly. And if you really want to have just one API (I would recommend having one per client and deploying automatically), you can switch on parameters passed with the call (for example, the subdomain in a header) to choose which database to connect to.
Let me give you a sample scenario and how I would suggest approaching it (this holds for the database or the API):
Say I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it is an API, you can even test it by calling the API and seeing that the new field is now returned; this is not a problem for your UI, because it is just a field that the UI does not use yet. After that, you change the UI to actually use this field and deploy that to production.
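As a rough sketch of the subdomain-to-database switch described above, assuming ASP.NET and per-tenant connection strings kept in configuration (the "Tenant_" naming convention and the helper itself are hypothetical):

using System;
using System.Configuration;
using System.Web;

public static class TenantConnection
{
    // Resolve a connection string from the request's subdomain,
    // e.g. "client1.example.com" -> connection string named "Tenant_client1".
    public static string Resolve(HttpRequest request)
    {
        string subdomain = request.Url.Host.Split('.')[0];
        ConnectionStringSettings entry =
            ConfigurationManager.ConnectionStrings["Tenant_" + subdomain];

        if (entry == null)
            throw new InvalidOperationException("Unknown tenant: " + subdomain);

        return entry.ConnectionString;
    }
}

// Usage, e.g. in Page_Init or wherever the data access is created
// (EF's DbContext accepts a connection string directly):
// var db = new MyDbContext(TenantConnection.Resolve(Request));

This resolves the database per request rather than per session, so nothing tenant-specific has to live in a session variable.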

How to implement synchronized Memcached with database

AFAIK, Memcached does not support synchronization with a database (at least SQL Server and Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do some heavy validations which require a lot of data from the database. We cannot keep a static copy of this data, as we don't know whether it has been modified, so we fetch the data every time, which slows the process down.
One possible solution could be:
Write triggers on the database to create/update prefixed-postfixed files (table-PK1-PK2-PK3-column) whenever records change
Monitor these file changes using FileSystemWatcher and expire the corresponding key (table-PK1-PK2-PK3-column) so the data is fetched fresh
Problem: There would be around 100,000 users using any combination of data for 10 hours. So we will end up having a lot of files e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting 5 million files at least. Now it looks like a pathetic solution :(
Other possibilities are:
call a web service asynchronously from the trigger, passing the key to be expired
call an exe from the trigger without waiting for it to finish, and have this exe expire the key (I have had some success with this approach on SQL Server, using xp_cmdshell to call an exe; calling an exe from Oracle's trigger looks a bit more difficult)
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory or not. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a DB command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
MS SQL Server has a "Change Tracking" feature that may be of use to you. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records for every update, insert and delete on a table, and lets you query for changes to records made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than building your own tracking tables. This has been a feature since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tables and lets you query which columns might have been modified. Then you can query the tables, joining on those keys, to get the current data. If you want it to capture the data as well, you can use Change Data Capture, but it requires more overhead and at least SQL Server 2008 Enterprise edition.
Change Data Capture
I have no experience with Oracle, but I believe it has some tracking functionality as well. This article might get you started:
20 Using Oracle Streams to Record Table Changes
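To make the Change Tracking suggestion a little more concrete, here is a minimal polling sketch in C# (the dbo.Items table, its Id key column and the connection string are placeholders; change tracking must already be enabled on the database and on the table):

using System;
using System.Data;
using System.Data.SqlClient;

public static class ChangePoller
{
    // Prints the keys changed since lastSyncVersion and returns the version to remember for next time.
    public static long PollChanges(string connectionString, long lastSyncVersion)
    {
        const string sql = @"
            SELECT CT.Id, CT.SYS_CHANGE_OPERATION
            FROM CHANGETABLE(CHANGES dbo.Items, @lastSyncVersion) AS CT;
            SELECT CHANGE_TRACKING_CURRENT_VERSION();";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.Add("@lastSyncVersion", SqlDbType.BigInt).Value = lastSyncVersion;
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // SYS_CHANGE_OPERATION: I = insert, U = update, D = delete
                    Console.WriteLine("Id {0} was {1}", reader.GetInt32(0), reader.GetString(1));
                }

                reader.NextResult();
                reader.Read();
                return reader.GetInt64(0);   // store this and pass it back in on the next poll
            }
        }
    }
}

Each changed key could then be turned into a cache key (in the table-PK-column style from the question) and expired, which avoids both the triggers and the file-per-key scheme.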

How to share a connection between EF DbContext and AspNet Membership to avoid transactions escalating to DTC

I have an ASP.NET MVC3 application that uses an EF 4.1 DbContext, database-first data layer. The EDMX approach works fine as I tend to make changes to my data model before adapting the application to them. The application works fine with the special EF connection string that includes metadata references.
However, there's one fly in the ointment. The application also uses ASP.NET membership and roles, which require a standard connection string. I have several use cases that involve both the membership tables and other (EF-managed) tables. As the two use separate connection strings, transactions that involve both need the DTC to handle them. I don't want to go that route if I can help it; I'd rather all parts of the application simply used the same connection.
Getting EF to run with a plain connection string however is eluding me. Can anyone tell me how it is done, please?
You have several options here. (I know this is long, but please try to read the whole thing). It would help if you can give an actual scenario where you need such a transaction.
First, you are working on the false assumption that if both EF and Membership have the same connection string, they will use a common connection. This may be true sometimes, but it is not guaranteed. Connection pooling tries to reuse the same connection for a given string, but if a connection is already in use it will create a second connection (or reuse an existing second connection already in the pool). So this line of reasoning will get you in trouble at some point.
One of the problems that Membership is designed to solve is to have a pluggable provider interface, so you can swap out membership providers and move to a different one (such as going from SQL to Active Directory) without having to modify your application (or having to modify it much).
More tightly integrating these functions means throwing that benefit away. Maybe that's acceptable, but you should realize that going along these paths essentially tightly couples your data model to the specific Membership provider schemas. A few years ago, that didn't seem like it would be a problem as the membership system hadn't changed in years... but lately, MS and others have been introducing new Membership systems like SimpleMembership and Universal Providers which have different schemas.
So, if we're removing one of the primary features of Membership, why even continue to use it? Well, there are still some benefits from Membership. The primary one is that it provides an out-of-the-box, full implementation of a user management library, including secure password encryption/hashing and features like question-and-answer authentication. That's not something to sneeze at, as writing a secure, bug-free membership system from scratch is not trivial (even though it would seem so at first).
So, one option is to implement your own MembershipProvider based on an existing one (like SqlMembershipProvider; Microsoft provides the source for these). Then you can simply override the schema to match whatever you want, but keep all the other features like password encryption and whatnot. Just fit them into your own schema. That makes them fit your data model a lot better.
However, even if you choose to use the standard membership provider, there are some things you can do.
First, you can simply map the membership tables into your Entity Framework model. Just drag and drop them onto your designer, or add them in Code First. However, if you do this, you should treat them as read-only, and you should not create foreign key relationships between the membership tables and your tables. Instead, do manual joins in your EF queries (which is more work, but safer) and treat them as stand-alone tables.
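A sketch of that idea in Code First, mapping the standard aspnet_Users table as a read-only entity and joining manually (the application entity and context are hypothetical; column names follow the usual SqlMembershipProvider schema):

using System;
using System.Data.Entity;
using System.Linq;

// Read-only stand-in for the standard aspnet_Users table.
public class AspnetUser
{
    public Guid UserId { get; set; }
    public string UserName { get; set; }
}

// Hypothetical application entity that stores the membership UserId, with no mapped FK.
public class Post
{
    public int Id { get; set; }
    public Guid AuthorId { get; set; }
    public string Title { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
    public DbSet<AspnetUser> AspnetUsers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map onto the existing membership table; no relationships to application tables.
        modelBuilder.Entity<AspnetUser>()
            .ToTable("aspnet_Users")
            .HasKey(u => u.UserId);
    }
}

public static class PostQueries
{
    // Manual join instead of a navigation property; membership rows stay untracked and read-only.
    public static IQueryable<object> PostsWithAuthors(AppDbContext db)
    {
        return from p in db.Posts
               join u in db.AspnetUsers.AsNoTracking() on p.AuthorId equals u.UserId
               select new { p.Id, p.Title, u.UserName };
    }
}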
Ok, so what about situations where you need to update or delete data from the membership tables as part of a query? Frankly, if you're using the standard membership tables, I see almost no reason this should ever have to happen.
The membership tables are pretty simple and have very little data in them that you should need as part of any statements in your app, unless you're using the Profile provider, which I never do. If you need that kind of per-user data, I suggest creating your own table rather than using the ProfileProvider.
The only case I see where you may want to enlist a transaction is when creating a new user. However, since this is a one-time event, a distributed transaction may not be such a terrible thing. That said, there may not always be a DTC available to you... so in those cases, the best you can do is use a try-catch block to deal with exceptions.
The alternative is to throw away Membership completely and create your own IPrincipal and IIdentity implementations and simply write your own user management (I would still use the SqlMembershipProvider source as a basis for this, however, as it's a good implementation).
Then, since user management is not part of a separate subsystem, you can safely use it for updates and deletes without worrying about what the other subsystem might be doing.
TL;DR
If you can't accept a distributed transaction, then either change your workflows, change your code to work with a try-catch-finally statement (though this won't guarantee a rollback if the app code dies suddenly, as in a power outage), or use a custom IPrincipal and IIdentity implementation.
I discovered an answer here: https://stackoverflow.com/a/3408209/1169670. Adding "Enlist=false" to the ASP.NET Membership system's connection string stopped the escalation to DTC.
However, this approach simply prevents the membership system from enlisting in the transaction. That was sufficient for my requirements, but it may not be in every case.
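If you prefer to set the flag in code rather than in the config file, something along these lines should work (the method is just a hypothetical helper around SqlConnectionStringBuilder):

using System.Data.SqlClient;

public static class MembershipConnection
{
    public static string WithoutEnlistment(string membershipConnectionString)
    {
        // Disable auto-enlistment in ambient (System.Transactions) transactions.
        var builder = new SqlConnectionStringBuilder(membershipConnectionString)
        {
            Enlist = false
        };
        return builder.ConnectionString;
    }
}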
You should take a look at the ASP.NET Universal Providers, which are EF Code First based. The membership schema is exposed as POCO classes and DbSets, so you should be able to include the DbSets in a common DbContext class:
http://nuget.org/packages/Microsoft.AspNet.providers.core
I don't think you can get EF to use a "plain" connection string.
In a few applications I have identical normal and EF connection strings sitting side by side.

How to implement locking across a server farm?

Are there well-known best practices for synchronizing tasks across a server farm? For example if I have a forum based website running on a server farm, and there are two moderators trying to do some action which requires writing to multiple tables in the database, and the requests of those moderators are being handled by different servers in the server farm, how can one implement some locking functionality to ensure that they can't take that action on the same item at the same time?
So far, I'm thinking about using a table in the database to sync, e.g. check for the ID of the item in the table: if it doesn't exist, insert it and proceed, otherwise return. A shared cache could probably also be used for this, but I'm not using one at the moment.
Any other way?
By the way, I'm using MySQL as my database back-end.
Your question implies data level concurrency control -- in that case, use the RDBMS's concurrency control mechanisms.
That will not help you if later you wish to control application level actions which do not necessarily map one to one to a data entity (e.g. table record access). The general solution there is a reverse-proxy server that understands application level semantics and serializes accordingly if necessary. (That will negatively impact availability.)
It probably wouldn't hurt to read up on CAP theorem, as well!
You may want to investigate a distributed locking service such as Zookeeper. It's a reimplementation of a Google service that provides very high speed distributed resource locking coordination for applications. I don't know how easy it would be to incorporate into a web app, though.
If all the state is in the (central) database then the database transactions should take care of that for you.
See http://en.wikipedia.org/wiki/Transaction_(database)
This question is old, so this may be irrelevant for you, but it may still be useful for others, so I'll post it anyway.
You can use a "SELECT ... FOR UPDATE" DB query on a locking object, so you actually use the DB to achieve the lock mechanism.
If you use an ORM, you can also do that. For example, in NHibernate you can do:
session.Lock(Member, LockMode.Upgrade);
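Outside of an ORM, roughly the same pattern with plain ADO.NET against MySQL could look like this (the item_locks table and the MySql.Data connector are assumptions; the row lock is held until the transaction commits or rolls back, and it relies on a transactional engine such as InnoDB):

using System;
using MySql.Data.MySqlClient;

public static class RowLock
{
    // Lock the row for itemId, run the work inside the same transaction, then commit.
    public static void WithRowLock(string connectionString, int itemId,
                                   Action<MySqlConnection, MySqlTransaction> doWork)
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var command = new MySqlCommand(
                    "SELECT id FROM item_locks WHERE id = @id FOR UPDATE",
                    connection, transaction))
                {
                    command.Parameters.AddWithValue("@id", itemId);
                    command.ExecuteScalar();   // blocks other transactions that try to lock the same row
                }

                doWork(connection, transaction);   // the multi-table writes, on the same transaction
                transaction.Commit();              // releases the row lock
            }
        }
    }
}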
Having a table of locks is an OK way to do it; it is simple and it works.
You could also have the code as a service on a single server, more of a SOA approach.
You could also use a timestamp field with transactions: if the timestamp has changed since you last got the data, you can roll back the transaction. So if someone gets in first, they have priority.
