Using SqlCe as a database and unit tests with Sqlite

At the moment I am using SqlCe as the database for my project and doing unit tests with Sqlite because of its simplicity (such as allowing an in-memory database). I just wonder: could this lead to problems or oddities in the future?

If using a different persistence mechanism inside of your unit tests causes you a problem, then the problem is likely caused by one of two things:
Your unit tests
Your general application architecture
The tests that you write for an object should not depend on the way its dependencies are implemented; depending on them automatically turns your unit tests into integration tests.
You should design your objects using the concept of Persistence Ignorance, meaning they are implemented in such a way that they do not depend on how the underlying datasource is implemented. A common method for achieving PI inside of enterprise applications is the Repository Pattern, which abstracts the interface your objects use to access the datasource from the underlying implementation of the datasource itself. This means that, in theory, you can create new providers for different datasources without having to change the implementation of the objects that depend on them.
For example:
Let's say that you have an entity called Customer which you save inside of a SqlCe database. You could create an interface called ICustomerRepository that is implemented by a SqlCeCustomerRepository, which you use inside of your main application. That way, in your unit tests, you could swap it out for a SqliteCustomerRepository if you found that to be an easy way to create a mock datasource. To keep things even simpler, you could just create an InMemoryCustomerRepository that uses a List<T> under the hood to store and retrieve your objects. The point is that it doesn't really matter HOW the datasource is implemented, as long as it conforms to the contract set up by your repository interface.
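A minimal sketch of what that could look like; the Customer shape and member names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The contract your objects depend on; they never see the implementation.
public interface ICustomerRepository
{
    Customer GetById(Guid id);
    void Save(Customer customer);
}

// Test double: no database at all, just a List<T> under the hood.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers = new List<Customer>();

    public Customer GetById(Guid id)
    {
        return _customers.FirstOrDefault(c => c.Id == id);
    }

    public void Save(Customer customer)
    {
        _customers.RemoveAll(c => c.Id == customer.Id);
        _customers.Add(customer);
    }
}
```

The production SqlCeCustomerRepository would implement the same interface against the real database; code that consumes ICustomerRepository never knows the difference.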
The benefits of this pattern also reach beyond unit testing, into general maintenance of your application. Suppose you want to scale up your architecture and use SQL Server instead of SQL CE: an abstraction such as a repository would help limit the amount of change required in your system for this to happen, leading to less development time, fewer bugs, and happier customers.

Related

Repositories and Entity Framework

I am building a web API for my application, and right now I am looking for ways to design my data access layer.
In the end, the application should be able to support a very large number of clients and a very large number of queries.
I have heard about Entity Framework, but I have two concerns with it:
I have been told by many that Entity Framework is not the best when it comes to performance, and performance is something I can't afford to neglect.
I am only starting to build the application and I'm still looking for developers to join me. If I start with Entity Framework now, I might want/need to change the ORM/library (because of the reason above or any other reason) or even the database technology in the future.
Repositories are a great way to abstract the data access layer and make it invisible to the business layer, so if one day I want to change the DAL/database technology, I won't have to touch the business layer, only the repositories.
Still, I have read a lot about how combining Entity Framework with the repository pattern is bad practice.
I am really confused, and I have a few questions:
Should I use Entity Framework? Performance is important to me.
Should I combine it with the repository pattern? If not, what do I do when I want to change the database technology/ORM?
I have experience using the repository pattern with a native SQL client (running native SQL queries), but I don't have any experience with ORMs, at least not in .NET.
Is it really a bad thing for a big application to use native SQL queries and wrap them in repositories?
It is really important to me to begin writing my application in the best way possible (applying all the best practices) so I won't have as much of a struggle in the future.
Thanks,
Arik
Ad. 1) Yes, Entity Framework is dead slow out of the box - BUT - if the developers have deep knowledge of Entity Framework, what it does behind the scenes, and how to optimize its queries, it can be as fast as your own, more low-level implementation of data access.
Ad. 2) If you want to change the ORM or the database technology, that is not a matter of whether you use EF or not; it's a matter of the architecture you design for the software.
Ad. 1) See Ad. 1 above; if performance is really important, I personally would go with a low-level SqlDataReader, although, as I said, it's possible to use EF in a performant way.
Ad. 2) I don't see anything bad in combining the repository pattern with EF. In small applications it may be overhead, because EF is basically an implementation of the repository pattern itself, so you would get a "double repository pattern", but it allows you to abstract away the coupling with EF.
Ad. 3) I don't think it's a bad way to go, but obviously it depends on the application.
I think that using the repository pattern is a good idea, and a sort of way out if you run into performance issues.
About Dapper: the question is why Dapper is more performant than EF and NHibernate. No lazy loading, no generated DML, a simple mapper, and so on. If you want generated DML (I do) and occasionally lazy loading, you can take a mixed approach: Repository Pattern + EF + Dapper.
My actual approach is Repository Pattern + EF + a few raw queries (massive updates and deletes and a few selects - EF writes huge SQL statements even for simple queries). To map the selects you can include Dapper (I don't), or do it by hand: map manually, use the mapping features inside EF (though there are some limitations), or write something generic. Usually I map manually, but I also wrote a mapper based on the EF mapping:
Entity framework Code First - configure mapping for SqlQuery
I used it a few times, but I don't actually use it anymore.
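To make the mixed Repository + EF + Dapper approach described above concrete, here is a minimal sketch; the Order entity, OrderRepository, table and column names are all illustrative, not from any particular project:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;     // Entity Framework (DbContext API)
using System.Data.SqlClient;
using System.Linq;
using Dapper;                 // adds Query<T>() / Execute() to IDbConnection

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderRepository
{
    private readonly OrdersContext _context;
    private readonly string _connectionString;

    public OrderRepository(OrdersContext context, string connectionString)
    {
        _context = context;
        _connectionString = connectionString;
    }

    // Ordinary writes go through EF: generated DML plus change tracking.
    public void Add(Order order)
    {
        _context.Orders.Add(order);
        _context.SaveChanges();
    }

    // Hot read path goes through Dapper: hand-written SQL, cheap mapping.
    public IEnumerable<Order> GetOpenOrders()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            return connection.Query<Order>(
                "SELECT Id, Status, CreatedOn FROM Orders WHERE Status = @Status",
                new { Status = "Open" }).ToList();
        }
    }

    // Massive update bypasses EF entirely, as described above.
    public int CloseOrdersOlderThan(DateTime cutoff)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            return connection.Execute(
                "UPDATE Orders SET Status = 'Closed' WHERE CreatedOn < @Cutoff",
                new { Cutoff = cutoff });
        }
    }
}
```

The business layer only ever sees OrderRepository, so whether a particular method is served by EF or by raw SQL remains an implementation detail.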

Usage of Dependency Injection other than writing unit test friendly programs

What use do Dependency Injectors have other than making programs unit-test friendly?
I have used dependency injection in several projects and I like the approach. However, I was wondering what the real use of this pattern is. Give me just one use, but with a proper explanation and code if possible.
Plenty of information if you Google it. From Wikipedia:
Advantages
Because dependency injection doesn't require any change in code behavior it can be applied to legacy code as a refactoring. The result is more independent clients that are easier to unit test in isolation using stubs or mock objects that simulate other objects not under test. This ease of testing is often the first benefit noticed when using dependency injection.
Dependency injection allows a client to remove all knowledge of a concrete implementation that it needs to use. This helps isolate the client from the impact of design changes and defects. It promotes reusability, testability and maintainability.
Dependency injection can be used to externalize a system's configuration details into configuration files allowing the system to be reconfigured without recompilation. Separate configurations can be written for different situations that require different implementations of components. This includes, but is not limited to, testing.
Reduction of boilerplate code in the application objects since all work to initialize or set up dependencies is handled by a provider component.
Dependency injection allows concurrent or independent development. Two developers can independently develop classes that use each other, while only needing to know the interface the classes will communicate through. Plugins are often developed by third party shops that never even talk to the developers who created the product that uses the plugins.
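As a concrete illustration of that last point (and of removing knowledge of concrete implementations in general), here is a minimal constructor-injection sketch; the IMessageSender/OrderNotifier names are invented for the example:

```csharp
using System;

// The client depends only on this abstraction.
public interface IMessageSender
{
    void Send(string to, string body);
}

public class EmailSender : IMessageSender
{
    public void Send(string to, string body)
    {
        Console.WriteLine("Email to " + to + ": " + body);
    }
}

public class SmsSender : IMessageSender
{
    public void Send(string to, string body)
    {
        Console.WriteLine("SMS to " + to + ": " + body);
    }
}

public class OrderNotifier
{
    private readonly IMessageSender _sender;

    // The dependency is injected; OrderNotifier never learns
    // which concrete sender it is talking to.
    public OrderNotifier(IMessageSender sender)
    {
        _sender = sender;
    }

    public void NotifyShipped(string customer)
    {
        _sender.Send(customer, "Your order has shipped.");
    }
}

class Program
{
    static void Main()
    {
        // Swap the implementation here (or pick it from configuration)
        // without touching OrderNotifier at all.
        var notifier = new OrderNotifier(new EmailSender());
        notifier.NotifyShipped("alice@example.com");
    }
}
```

The two teams in the last bullet only need to agree on IMessageSender; each side can build and test its class independently.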

Sharing stored procedures across multiple apps

Team A has an enterprise app that uses ADO.NET for data access and executes stored procedures. The data access is encapsulated in its own project (let's call it DAL.dll).
Team B is creating another, unrelated app that reuses the stored procedures from the enterprise app. This app currently uses the MS data access application block. The issue we run into is that whenever Team A makes any change to the input/output parameters of the stored procedures, there is a runtime error in Team B's app, and that app needs to be updated to accommodate the added (or removed) parameters. Most of these changes go unnoticed until a user complains. At the very least, we would like the app to throw a compilation error so that the build process warns us of the changes.
One way to do this is to have Team B's project add a reference to the DAL.dll
I'd like to know if there are any other cleaner ways of solving the issue. We are ready to replace Team B's MS Data application block to use a different technology (Entity Framework?) if necessary.
Among the other answers, I'd strongly suggest getting those stored procedures into source control, in a Database Project. You may then be able to use the features of your source control system to do several things:
Lock some of the code so that it cannot be changed
Give you notifications if the code is changed
Warn you if the stored procedures change in a way that would prevent them from being called
Branch the stored procedures so that each team can have their own version of changed code, while keeping the unchanged stored procedures common. You of course will need to separate the different versions in the database.
I agree with the other posters on this thread that you should not share stored procedures across different .NET DLLs; that is just a recipe for disaster. I would also shy away from ORMs like Entity Framework if you are doing anything at all complicated with your database schema, because ORMs excel at translating a simple object model from your .NET application classes into SQL tables and SPs, but traditionally do poorly at optimizing them for performance on the database side. There will be people who claim otherwise, and they may have a valid point if they are experts at wrangling an ORM into doing what they want, but chances are you are not, and it will cause you headaches in the long run.
A shared data access layer might work, but conceptually you are then just changing the implementation of the dependency from code that a DBA wrote to code that a .NET programmer wrote. Yes, you can use integration tests to achieve better verifiability, but the same case could be made for SQL with tools like Red Gate's SQL Test. I would shy away from this approach if the two applications are already experiencing pain from sharing SPs; that is an indication that the dependency should simply be done away with.
If it were up to me, I'd just create a new schema for Team B's app. You can read more about schemas in SQL Server here: MSDN Schema description for 2008 R2. You can think of them as namespaces for SQL Server, but with some additional bells and whistles like permissions and access control. Separating your different applications into separate schemas on the same shared database will probably make for the most flexible implementation in the long run.
unrelated app that's reusing the stored procedures in the enterprise app
If these two applications are really unrelated, why are they sharing procedures, or even the same database? I know this is a long read, but I recommend it: A Better Path to Enterprise Architectures
The partitioning concept in there relates to the bounded context in Domain-Driven Design:
Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.
Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside.
You can expect problems when you don't deal with this explicitly. You're lucky to be seeing early failures; this can otherwise turn into problems that are much harder to find in the long run.
Analyze the problem again with the above in mind. Consider if you're missing some explicit context where this common functionality should live.
My question is: which team owns the stored procedures and the shared database? As a matter of good architecture/design, two different apps should not share the same database/procedures.
A better way to share data/functionality between two different applications is through a service or API, so that the team that owns the functionality is responsible for maintaining it.
Also, good communication between the two teams is highly recommended.
Depending on the owner of the DAL project, you could host web services and share the API. That way, you separate the Data Access Layer from the business logic, which allows anyone to use the same DAL without having to publish it to each different location.
From my point of view, it looks like both Team A and Team B should share the same core model and look at Multitier architecture as a possible solution.
It sounds like it would make sense to create a shared DAL that both applications can use.
I would add unit tests (or really integration tests) to make sure the DAL stays compatible with the apps after changes. That way your tests would fail if incompatible changes have been made.
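A sketch of such an integration test, assuming NUnit and a hypothetical CustomerDal class in the shared DAL; the class, method, and connection string are placeholders for whatever DAL.dll actually exposes:

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerDalIntegrationTests
{
    // Runs the real stored procedure against a dedicated test database,
    // so a changed parameter list fails the build here instead of
    // surfacing as a runtime error in Team B's app.
    [Test]
    public void GetCustomerById_ReturnsTheRequestedCustomer()
    {
        var dal = new CustomerDal(
            "Server=.;Database=TestDb;Integrated Security=true");

        var customer = dal.GetCustomerById(42);

        Assert.IsNotNull(customer);
        Assert.AreEqual(42, customer.Id);
    }
}
```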
"I'd like to know if there are any other cleaner ways of solving the issue."
The cleanest way is for Team B to sit down with Team A and encapsulate the relevant business logic into a shared API. It doesn't matter so much how you implement that API; what does matter is that the API's interface is documented and versioned so everyone knows what to expect.
One reasonable mechanism for this in a .NET environment is to use Microsoft's WebAPI.
In short, the question of "how do we share a stored procedure?" is most likely looking at the wrong level of abstraction.
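A minimal sketch of what such an endpoint could look like with ASP.NET Web API; ICustomerService and CustomerDto are invented names standing in for whatever the shared logic actually exposes:

```csharp
using System.Web.Http;

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical abstraction over Team A's stored procedures.
public interface ICustomerService
{
    CustomerDto GetById(int id);
}

public class CustomersController : ApiController
{
    private readonly ICustomerService _service;

    // Wired up by a DI container; resolver configuration omitted here.
    public CustomersController(ICustomerService service)
    {
        _service = service;
    }

    // GET api/customers/5 -- Team B consumes this HTTP contract, so
    // Team A can change the underlying procedures freely as long as
    // the URL and the DTO stay versioned and documented.
    public CustomerDto Get(int id)
    {
        return _service.GetById(id);
    }
}
```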

Entity Framework Best Practices in ASP.Net

I have just started working with Entity Framework in an ASP.NET application, and I was wondering if someone could point me in the right direction with respect to best practices. I have some questions in particular, which I have listed below.
First of all, I am using Entity Framework 4.0. I already had my database created, so I created a class library and generated the entity model from the database. I had been using GUIDs generated by the database, so I had to modify the SSDL to include the attribute StoreGeneratedPattern="Identity". Is there a way to do this automatically, or do I have to manually edit the SSDL every time I update the database and the model? (For those of you who are facing a problem with GUIDs or want to know why I am doing this, this is a clear article on the problem with auto-generated GUIDs.)
I was planning on using a single file in the class library to hold all the database queries. Is this good practice? How would I ensure different programmers don't rewrite the same queries over and over?
I was planning on using a unique context per method. Is this the right way to go? I read through Rick Strahl's post on context lifetime management, but I am still not sure whether a unique context per method is the right approach.
Can I have my database queries as static methods, since they do not make use of any instance variables?
If I use a unique context per method as mentioned in 3, and I wish to modify an entity object returned by one context, what would be the best practice? Do I use the attach functionality to attach the object to a new context and save the changes? I haven't tried this, but I have read a couple of articles and it seems fairly straightforward; I wanted to know if there are any alternatives.
If you have any suggestions on how you use Entity Framework in an ASP.NET application, I could definitely use the help. This is my first ASP.NET/Entity Framework application, so any tips will help.
This was an issue in the initial version of VS 2010. In some cases it should already work correctly once you have VS 2010 SP1 installed. If it doesn't, install this KB.
You can easily end up with a huge class holding a lot of static methods. Try to introduce some separation by the entity type you are querying. You will never fully ensure that another programmer does not create the same query again; this comes down to consistent query naming, documentation, and communication among programmers.
A unique context "per method" is usually not needed. In most cases you should be happy with a unique context per logical (business) transaction; in a web application, a logical operation is in most cases the processing of a single request, so use one context per request (a sketch of this pattern follows at the end of this answer).
If you pass a context instance to your queries, the answer is yes. Once you stop making them static and let them take the context instance from their containing class, you will be very close to the repository pattern.
This is exactly the problem with context per method, and it is hard to solve, because to make it work you must first detach the entity from the first context and attach it to the second. If your entity also has related entities loaded, all those relations will be nulled during detaching (unless you use a deep clone instead of detaching, i.e. create a second instance of the entity).
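A minimal sketch of the context-per-request pattern and of the attach-then-mark-modified step discussed above, for the EF 4 ObjectContext API; MyEntities stands in for your generated context type and Customer for one of its entity types:

```csharp
using System.Data;    // EntityState
using System.Web;     // HttpContext

// One ObjectContext is lazily created per HTTP request and stored in
// HttpContext.Items; dispose it from Application_EndRequest in Global.asax.
public static class ContextPerRequest
{
    private const string Key = "MyEntities.Current";

    public static MyEntities Current
    {
        get
        {
            var context = (MyEntities)HttpContext.Current.Items[Key];
            if (context == null)
            {
                context = new MyEntities();
                HttpContext.Current.Items[Key] = context;
            }
            return context;
        }
    }

    // Call this from Application_EndRequest.
    public static void Dispose()
    {
        var context = (MyEntities)HttpContext.Current.Items[Key];
        if (context != null)
        {
            context.Dispose();
        }
    }
}

public class CustomerService
{
    // Updating an entity that was loaded by another, already-disposed
    // context: attach it, then mark it modified so that SaveChanges
    // issues an UPDATE. Related entities must be handled separately.
    public void Update(Customer customer)
    {
        var context = ContextPerRequest.Current;
        context.Customers.Attach(customer);
        context.ObjectStateManager.ChangeObjectState(customer, EntityState.Modified);
        context.SaveChanges();
    }
}
```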

Best Small & Mid-Size Application Architecture

I am developing a mid-size application and want to implement a proper application architecture. I've read some architecture books and approaches, and I am thinking about:
AAFN (Application Architecture for .NET), presented by Microsoft
SOA
SDLM
SDO
MVC
and so on ...
This is a web application that will be extended with some other small applications (think of something like an MIS with one or two cores).
Which projects should I have? I am thinking about:
Common // used in all projects
Framework // main framework
DAO // data access objects (Entity Framework or NHibernate)
UI // will be available in two variants: web and Windows (WPF) interfaces
BusinessEntities // all sub-application project logic goes here
ApplicationNameProject // each application has its own logic (in BusinessEntities)
ApplicationUnit // each application's entities are placed here
ApplicationNameProject // each application's data entities (in ApplicationUnit)
Services // WCF services go here, shared with all applications
This is the architecture I am thinking about. Nothing forces me to use it; I want to know what the best fit for me is, and I can change all of it, add other projects, or remove some of these.
Any help is appreciated.
There is no "best small or mid-size application architecture" as a silver bullet to fit any project, so drop that idea right now or you'll be in for a world of pain down the road.
The architecture for any given project should fit the purpose of that project. In some cases, ASP.NET WebForms with direct queries into the database will be the most appropriate architecture; in some cases MVC will be the right architecture; in other cases it will be a Windows Forms application built on top of a web service that connects to a relational database through an ORM like LINQ to SQL or NHibernate.
You can't decide on a one-architecture-fits-all approach, it just doesn't work. Each architecture has its merits and weaknesses and thus projects for which it is well suited and projects for which it should be avoided. You should pick the approach that makes the most sense for the current project/scenario.
Given that, however, I tend to take a fairly uniform approach.
If I need a quick utility project that does a very specific thing and is highly unlikely to be needed for anything else, I might use a console application with queries against my database hardcoded.
If I need a common set of queries that I'm likely to need from multiple projects, I'll write them as stored procedures to get the performance benefits, and build a data access layer that leverages those stored procedures to give me standardized business objects, in a standard DAL (data access layer)/BOL (business object layer)/BLL (business logic layer) approach. This is advantageous because once I've got this set of libraries built, I can float any application over the top - for instance a WebForms or MVC application.
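A skeletal sketch of the DAL piece of that approach; the procedure name, connection string, and Customer shape are illustrative:

```csharp
using System.Data;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Data access layer: wraps a stored procedure and hands back a
// standardized business object to the layers above.
public class CustomerDal
{
    private readonly string _connectionString;

    public CustomerDal(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer GetById(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("dbo.GetCustomerById", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Id", id);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                {
                    return null;
                }
                return new Customer
                {
                    Id = (int)reader["Id"],
                    Name = (string)reader["Name"]
                };
            }
        }
    }
}
```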
MVC is advantageous because of separation of concerns: your controller can interact with your business library simply to access the data it needs, and your views are really just that - a view of the data that the user can interact with. The views do nothing more than present the current data to the user and transport any data changes back to the controller; no logic is held in the view, which makes it far easier to unit test and change components without affecting the rest of the application.
The drawback to a multi-tiered or multi-layered approach like this, though, is that it takes time to architect properly, and if you're only after a throwaway utility application like the ones demonstrated on stage at developer conferences, then it is complete overkill and I wouldn't bother with it.
Think of it like this: every layer, every library, every component requires justification. If there is less justification for it than against it, then don't do it. The key is not to do something without reason - anything you do is correct provided you have a well-thought-out reason for it, and by well thought out I mean that you've weighed good reasons for and against and made an educated decision, not a decision based on half thoughts or, worse, no thought at all.
Anything but the most trivial .NET application should have several projects: a UI layer, some kind of business logic layer, a persistence (storage) layer and accompanying test projects. Each project should interact loosely through interfaces.
In general you should create the minimum number of layers you need to make your code testable and easy to understand.
To figure out the minimum you need, it can be a good idea to let your tests drive the internal design of the system. Each layer should have tests in its own right, with (possibly) the exception of the top HTML layer and the bottom SQL layer.
With that in mind, it helps to separate concerns as far as possible. For example, SQL queries should almost never sit in the same block of code as HTML: split things into multiple layers that each do one and only one thing. This makes changes easier.
Be aware of the difference between systems architecture (where loosely coupled Web services using e.g. REST interact) and the internal design of the system. It's a good idea to decouple the Web service interfaces (as consumer or provider) in their own layers as this is an area that often changes.
These designs are an art that's best learned by practice. With good unit tests you should find refactoring an application design fairly swift, so it's a good idea to look at technologies like Spring.NET or other inversion of control containers to make this easy.
