Explanation of POCO (Plain Old CLR Object)

I'm wondering if anyone can give a solid explanation (with example) of POCO (Plain Old CLR Object). I found a brief explanation on Wikipedia but it really doesn't give a solid explanation.

Instead of calling them POCOs, I prefer to call them persistence ignorant objects.
Because their job is simple, they don't need to care about what they are being used for or how they are being used.
Personally I think POCOs are just another buzzword (like Web 2.0 - don't get me started on that) for a public class with simple properties.
I've always used these types of objects to hold business state.
The main benefits of POCOs are really seen when you start to use things like the repository pattern, ORMs and dependency injection.
In other words, you could use an ORM (say, Entity Framework) to pull data back from somewhere (a database, a web service, etc.) and project that data into objects (POCOs).
These objects can be passed further down the app stack to the service layer, then onto the web tier.
Then, if one day you decide to switch over to NHibernate, you should not have to touch your POCOs at all; the only thing that should need to change is the ORM.
Hence the term 'persistence ignorant' - they don't care what they're being used for or how they are being used.
So to sum up, the pros:
Allows a simple storage mechanism for data, and simplifies serialization and passing data between layers.
Goes hand in hand with dependency injection, the repository pattern and ORMs. Flexibility.
Minimizes complexity and dependencies on other layers (higher layers only care about the POCOs, and POCOs don't care about anything). Loose coupling.
Simple testability (no stubbing required for domain testing).
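As an illustrative sketch of that flexibility (all names here are made up; a simple in-memory class plays the role of an ORM-backed repository):
using System.Collections.Generic;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IPersonRepository
{
    Person GetById(int id);
    void Save(Person person);
}

// Switching from EF to NHibernate means writing a new IPersonRepository
// implementation; Person and everything consuming IPersonRepository stay untouched.
public class InMemoryPersonRepository : IPersonRepository
{
    private readonly Dictionary<int, Person> _store = new Dictionary<int, Person>();

    public Person GetById(int id)
    {
        Person person;
        return _store.TryGetValue(id, out person) ? person : null;
    }

    public void Save(Person person)
    {
        _store[person.Id] = person;
    }
}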
Hope that helps.

You need to give more details, such as the context in which you plan to use POCOs. But the basic idea is that you create simple objects containing only the data/code that is necessary. These objects do not carry any "baggage" such as annotations, extra methods, base classes, etc. that might otherwise be required by (for example) a framework.

Example of a POCO:
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
}

Related

C#/ASP.NET MVC 4 Instantiate Object Derived From Interface In Factory Method

I currently have a factory class with a GetSelector function that returns a concrete implementation of ISelector. I have several different classes that implement ISelector, and based on a setting I would like to get the appropriate ISelector back.
public interface ISelector
{
    string GetValue(string Params);
}

public class XmlSelector : ISelector
{
    public string GetValue(string Params)
    {
        // Open the XML file and get the value (omitted in the question).
        throw new NotImplementedException();
    }
}

public static class SelectorFactory
{
    public static ISelector GetSelector()
    {
        return new XmlSelector(); // Needs changing to look at settings
    }
}
My question is: what is the best way to store the setting? I am aware of AppSettings etc., but I'm not sure I want to store strings in the web.config and perform a switch on them - it just seems really tightly coupled, in that if a new implementation of ISelector is written, the factory would need to be changed. Is there any way of perhaps storing an assembly name and instantiating based on that?
Thanks,
Chris
It is hard to say without knowing the architecture of your particular project, but at first glance here is what I would do: if the objects associated with ISelector can be decoupled from your web application, put them in a class library along with the factory. Your factory will still need to change when you implement a new ISelector, but if you can decouple the whole ISelector family from your actual web application, the depth of the refactoring you will have to do will be minimal compared to a monolithic architecture.
Personally, I tend to avoid AppSettings, web.config settings and the like for mission-critical design decisions. Using the web.config as an example: I have seen applications where architectural data is stored there for ease of configurability. The problem is that after compilation your web.config can be changed (that is its purpose, after all), and if the implementation of your classes depends on very specific values being chosen, you run the risk of a crash when someone inadvertently modifies the wrong value.
Like I said, all this depends entirely on your application architecture, but my reflex would be to split the components that could be subject to future modification out into a class library. Loose coupling is your friend ;).
Instead of doing it in AppSettings, I think a better approach would be to create a separate XML file that holds only the mappings; from that file you can iterate through the mappings and return the correct instance from GetSelector().
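As a rough sketch of that idea (the file name, element names and mapping format are all assumptions, not an established convention):
// selectors.xml might look like:
// <selectors>
//   <selector key="Xml" type="MyApp.XmlSelector, MyApp" />
// </selectors>
using System;
using System.Linq;
using System.Xml.Linq;

public static class SelectorFactory
{
    public static ISelector GetSelector(string key)
    {
        XDocument doc = XDocument.Load("selectors.xml");
        string typeName = doc.Root.Elements("selector")
            .Where(e => (string)e.Attribute("key") == key)
            .Select(e => (string)e.Attribute("type"))
            .Single();
        // Instantiate by assembly-qualified type name, so new ISelector
        // implementations need only a new mapping entry, not a factory change.
        return (ISelector)Activator.CreateInstance(Type.GetType(typeName, true));
    }
}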

Usage of repository between EF model and code consumer

I have binary data in my database that I'll have to convert to a bitmap at some point. I was wondering whether it's appropriate to use a repository and do the conversion there. My consumer, which is a presentation layer, will use this repository. For example:
// This is a class I created for modeling the item as it is consumed.
public class RealItem
{
    public string Name { get; set; }
    public Bitmap Image { get; set; }
}

public abstract class BaseRepository
{
    // Using Unity (http://unity.codeplex.com) to inject the entity context.
    // "MyEntities" stands in for the EF context type, which the original omitted.
    [Dependency]
    public MyEntities Context { get; set; }
}

public class ItemRepository : BaseRepository
{
    public List<RealItem> Select()
    {
        IEnumerable<Items> items = from item in Context.Items select item;
        List<RealItem> lst = new List<RealItem>();
        foreach (var itm in items)
        {
            // The stream must stay open for the lifetime of the Bitmap
            // (a GDI+ requirement), so it is deliberately not disposed here.
            MemoryStream stream = new MemoryStream(itm.Image);
            Bitmap image = (Bitmap)Image.FromStream(stream);
            RealItem ritem = new RealItem { Name = itm.Name, Image = image };
            lst.Add(ritem);
        }
        return lst;
    }
}
Is this a correct way to use the repository pattern? I'm learning this pattern and I've seen a lot of examples online that use a repository, but when I looked at their source code... for example:
public IQueryable<object> Select()
{
    return from q in base.Context.MyItems select q;
}
As you can see, almost no behavior is added to the system by their approach, except hiding the data access query, so I was confused: maybe the repository is something else and I got it all wrong. In the end, there should be extra benefits to using them, right?
Update: as it turned out, you don't need repositories if there is nothing more to be done to the data before sending it out. But wait - no abstraction over the LINQ query? That way the client has to provide the query statements, which can be a little unsafe and hard to validate. So maybe the repository also provides an abstraction over data queries? If that's true, then having a repository is almost always essential to a project's architecture! However, this abstraction could also be provided by SQL stored procedures. Which is the better choice if both options are available?
Yes, that's the correct way: the repository contract serves the application's needs, dealing only with application objects.
The (bad) example you see most of the time couples every repository implementation to IQueryable, which may or may not be implemented by the underlying ORM, and which is after all an implementation detail.
The difference between IQueryable and IEnumerable is important when dealing with remote data, but that's what the repository does in the first place: it hides the fact that you're dealing with a storage which may be remote. To the app, the repository is just a local collection of objects.
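To make the distinction concrete, here is a hedged snippet (Item and MyContext are stand-in types, not from the question):
using System.Collections.Generic;
using System.Linq;

// Stub types so the snippet stands alone; in real code these come from the ORM.
public class Item { public string Name { get; set; } }
public class MyContext { public IQueryable<Item> Items { get; set; } }

public static class QueryDemo
{
    // The LINQ provider can translate this into a store query (e.g. SQL);
    // filtering happens remotely.
    public static IQueryable<Item> Remote(MyContext context)
    {
        return context.Items.Where(i => i.Name == "x");
    }

    // AsEnumerable() switches to LINQ-to-Objects: everything is pulled back
    // first and the filter runs in memory.
    public static IEnumerable<Item> Local(MyContext context)
    {
        return context.Items.AsEnumerable().Where(i => i.Name == "x");
    }
}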
Update
The repository abstracts persistence access: it decouples the application from a particular persistence implementation and masks itself as a simple collection. This means the app doesn't know about Linq2Sql, SQL or the type of RDBMS used, if any. The app sends objects to, and receives objects from, the repo, while the repo actually persists or loads them. The app doesn't care how the repo does it.
I consider the repository a very useful pattern and I use it in every project, precisely because it marks the boundary between the application (the place where problems and solutions are defined and handled) and the storage/persistence where data is saved.
You can make your repository generic and get more value out of it. And make sure you access repositories through an interface (IItemRepository) in the manager layer, so that you can replace them with another data access method via a new repository implementation. Here is a good example of how to do this.
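A minimal sketch of what that generic contract could look like (the member list here is illustrative, not prescriptive):
using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
}

// The manager layer depends only on this interface, so the implementation
// (EF today, something else tomorrow) can be swapped without touching callers.
public interface IItemRepository : IRepository<RealItem>
{
    // Item-specific queries go here.
}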

Does anyone know a good practice to use when developing with the System.DirectoryServices classes?

I'm trying to create a data access layer using System.DirectoryServices. I'd like to use the MVC 2 framework and have all my views be mostly strongly typed. Does anyone know a good way to do this?
For example I started creating a Group Entity:
public class Group
{
    public string DistinguishedName { get; set; }
    public string GroupName { get; set; }
}
And a repository interface:
public interface IGroupRepository
{
    List<Group> Groups { get; }
}
I am confused about developing the GroupRepository using System.DirectoryServices. Connecting to a SQL database is easy - there are examples everywhere - but I have not been able to find any that use System.DirectoryServices in conjunction with a class in MVC. Has anyone tried to do something like this? Any help would be great.
If you're on .NET 3.5 (and if you use MVC 2, chances are good you are), you should check out the new System.DirectoryServices.AccountManagement namespace, which brings you lots of strong .NET classes and types for many of the directory objects you deal with on a regular basis - no need to re-invent the wheel (yet again!).
Check out this great article in MSDN magazine on how to use this S.DS.AM namespace:
Managing Directory Security Principals in the .NET Framework 3.5
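For a flavor of what the namespace gives you, here's a minimal sketch (the account name is a placeholder):
using System;
using System.DirectoryServices.AccountManagement;

public class Demo
{
    public static void Main()
    {
        // Connect to the current domain and look a user up by account name.
        using (var context = new PrincipalContext(ContextType.Domain))
        using (var user = UserPrincipal.FindByIdentity(context, "jdoe")) // placeholder account
        {
            if (user != null)
            {
                Console.WriteLine(user.DisplayName);
            }
        }
    }
}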
Update: for reasons I don't totally understand, the simple approach of using a UserPrincipal as the model for an ASP.NET MVC view doesn't work - it seems as if ASP.NET MVC cannot "find" any properties on that object.
So the approach would have to be to do something like this:
grab your UserPrincipal (or DirectoryEntry) from Active Directory
define a separate ViewModel - this is just a class that holds properties, like first name, last name and so forth
you can either fill that ViewModel class yourself, or grab some help like AutoMapper to make the mapping from UserPrincipal (DirectoryEntry) to your ViewModel easier (see the sketch after this list)
then display (or edit) your ViewModel class in a standard ASP.NET MVC view
handle any possible updates by transferring any changes back from the ViewModel to the "proper" object and persisting that object
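For instance, the fill-the-ViewModel step might be hand-rolled like this (a sketch; the ViewModel shape is an assumption, and AutoMapper could replace the manual mapping):
using System.DirectoryServices.AccountManagement;

public class UserViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class UserViewModelMapper
{
    // Copy just the properties the view needs off the directory object.
    public static UserViewModel FromPrincipal(UserPrincipal user)
    {
        return new UserViewModel
        {
            FirstName = user.GivenName,
            LastName = user.Surname,
            Email = user.EmailAddress
        };
    }
}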
It's a bit more involved than I'd like it to be - but I quite honestly don't see how else you can do this otherwise.

Thoughts on writing a "flexible" API?

I may have the wrong "pattern" here, but I think it's a fair topic.
I have an ASP.NET MVC application that calls out to a WCF service to get back the ViewModels that will be rendered. (The reason it uses a WCF service is so that other small MVC apps can also call on it for these ViewModels. It's internal only - not publicly available - so I can change anything on either side of the service. The idea is to move the logic that was in the website closer to the server/database so the round trips aren't so costly, and to make only one round trip overall from the web server to the database server.)
I'm trying to work out the best way to return these "ViewModels" from the service. There are lots of common little bits of functionality, but each page may want to display a different subset of them (so the homepage might want a list of tables; the next page, a list of tables and the users that are available).
So what's the best way of returning the information that each page wants, ideally without the web service knowing about the page?
Edit:
It's been suggested below that I move the logic in-process. That would usually be a lot faster, except it's exactly what we're moving away from, because in this case it is actually a lot slower: the database is on one server, the web app is on another, and the web app is particularly chatty at points (there are pages that could end up making 2K round trips - I have no control over reducing that number, before that's suggested). So moving the logic closer to the database is the next best way of making it more performant.
I would look at creating a ViewModel for each MVC app/view. The service could just return the maximum amount of data for the "view" in a logical sense, and each MVC app uses the pieces it wants when composing the ViewModel for its view.
Your service is then only responsible for one thing: returning data specific to a view's function. The controller of each app is responsible for using or not using pieces of the returned data.
This is also more flexible, as your ViewModels may require different validation rules. ViewModels also have MVC-specific needs (SelectList etc.) that shouldn't really be returned by a service layer. It may look like something can be shared at a glance, but there are generally lots of small differences that make sharing ViewModels a bad idea.
class MyServiceViewResult
{
    public int SomethingEveryViewNeeds { get; set; }
    public bool OnlyOneViewMightNeedThis { get; set; }
}

class ViewModel1
{
    public int IdProperty { get; set; }

    public ViewModel1(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
    }
}

class ViewModel2
{
    public int IdProperty { get; set; }
    public bool IsAllowed { get; set; }

    public ViewModel2(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
        IsAllowed = result.OnlyOneViewMightNeedThis;
    }
}
Instead of having a web service, why don't you just implement the service as a reusable library that encapsulates the desired functionality?
This will also allow you to use polymorphism to implement customizations. WCF doesn't support polymorphism in a flexible way...
Using an in-proc service will also be a lot faster.
See this related question for outlines of a polymorphic solution: Is this a typical use case for IOC?

How to best create a test DB when doing TDD?

What's the best practice for creating test persistence layers when doing an ASP.NET site (e.g. an ASP.NET MVC site)?
Many examples I've seen use Moq (or another mocking framework) in the unit test project, but I want to, like... mock out my persistence layer so that my website shows data and stuff, but it's not coming from a database. I want to do that last. All the mocking stuff I've seen only exists in unit tests.
What practices do people use when they want to stub or fake out a persistence layer for quick and fast development? I use dependency injection to handle it and have some hard-coded results for my persistence layer (which is really manual and boring).
What are other people doing? Examples and links would be awesome :)
UPDATE
Just a little update: so far I'm getting a fair bit of mileage out of having a fake repository and a SQL repository, where each class implements an interface. Then, using DI (I'm using StructureMap), I can switch between the fake repository and the SQL repository. So far, it's working well :)
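For reference, that switch can be a single registration change. Here's a sketch assuming StructureMap 2.6-style syntax (the repository names are illustrative, not from this question):
using StructureMap;

// Stub types so the snippet stands alone; the real ones are your repositories.
public interface IProductRepository { }
public class FakeProductRepository : IProductRepository { }
public class SqlProductRepository : IProductRepository { }

public static class Bootstrapper
{
    public static void Configure()
    {
        ObjectFactory.Initialize(x =>
        {
            // Point the interface at the fake while developing...
            x.For<IProductRepository>().Use<FakeProductRepository>();
            // ...or at the SQL-backed implementation for production:
            // x.For<IProductRepository>().Use<SqlProductRepository>();
        });
    }
}

// Consumers just ask the container for the interface:
// var repository = ObjectFactory.GetInstance<IProductRepository>();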
(also scary to think that I asked this question nearly 11 months ago, from when I'm editing this, right now!)
Assuming you're using the Repository pattern from Rob Conery's MVC Store Front:
http://blog.wekeroad.com/mvc-storefront/mvc-storefront-part-1/
I followed Rob Conery's tutorial but ran into the same want as you. The best thing to do is move the mock repositories you've created into a separate project called Mocks; then you can swap them out pretty easily for the real ones when you instantiate your service. If you're feeling adventurous, you could create a factory that takes a value from the config file to instantiate either a mock or a real repository,
e.g.
public static ICatalogRepository GetCatalogRepository(bool useMock)
{
    if (useMock)
        return new FakeCatalogRepository();
    else
        return new SqlCatalogRepository();
}
or use a dependency injection framework :)
container.Resolve<ICatalogRepository>();
Good luck!
EDIT: In response to your comments, it sounds like you want to use a list and LINQ to emulate a db's operations, e.g. GetProducts and StoreProduct. I've done this before. Here's an example:
public class Product
{
    public int Identity { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    //etc
}

public class FakeCatalogRepository
{
    private List<Product> _fakes;

    public FakeCatalogRepository()
    {
        _fakes = new List<Product>();

        // Set up some initial fake data
        for (int i = 0; i < 5; i++)
        {
            Product p = new Product
            {
                Identity = i,
                Name = "product" + i,
                Description = "description of product" + i
            };
            _fakes.Add(p);
        }
    }

    public void StoreProduct(Product p)
    {
        // Emulate insert/update functionality
        _fakes.Add(p);
    }

    public Product GetProductByIdentity(int id)
    {
        // Emulate "SELECT * FROM products WHERE id = 1234"
        var aProduct = (from p in _fakes.AsQueryable()
                        where p.Identity == id
                        select p).SingleOrDefault();
        return aProduct;
    }
}
Does that make a bit more sense?
Boring or not, I think you're on the right track. I assume you're creating a fake repository that is a concrete implementation of your IRepository, which in turn is injected into your service layer. This is nice because at some point in the future, when you're happy with the shape of your entities and the behavior of your services, controllers and views, you can test-drive your real repositories that will use the database to persist those entities. Of course, those tests will be integration tests by nature, but just as important, if not more so.
One thing that may be less boring for you when the time comes to create your real repositories: if you use NHibernate for your persistence, you will be able to let NHibernate generate your database after you create the NHibernate mappings for your entities, assuming you don't have to use a legacy schema.
For instance, I have the following method that is called by my SetUpFixture to generate my db schema:
public class SchemaBuilder
{
    public static void ExportSchema()
    {
        Configuration configuration = new Configuration();
        configuration.Configure();
        new SchemaExport(configuration).Create(true, true);
    }
}
and my SetUpFixture is as follows:
[SetUpFixture]
public class SetUpFixture
{
    [SetUp]
    public void SetUp()
    {
        SchemaBuilder.ExportSchema();
        DataLoader.LoadData();
    }
}
where DataLoader is responsible for creating all of my seed data and test data using the real repository.
This probably doesn't answer your questions but I hope it serves to reassure you in your approach.
Greg
Although I'm not using ASP.NET or the MVC framework, I do have the need to test services without hitting the database. Your question triggered the writing up of a short (OK, maybe not so short) summary of how I do it. I'm not claiming it's the best or anything, but it works for us. We access data through a repository, and when required we plug in an in-memory repository, as explained in the post.
http://blogs.microsoft.co.il/blogs/kim/archive/2008/11/14/testable-data-access-with-the-repository-pattern.aspx
I am using a complete in-memory database with SQLite and ActiveRecord. Basically, we drop and re-create the database before every integration test is run, so that the data is always in a known state. The contents of the database are inserted through code. An example would be like this:
ActiveRecord.Initialize(/* lots of parameters */);
ActiveRecord.DropSchema();
ActiveRecord.CreateSchema();
and then we just add lots of customers or whatever, DDD style:
customerRepository.Save(customer);
Another way to solve this could be using NDbUnit to maintain the state of the database.
I know this question is a bit old, but I've finally come up with an answer :)
Firstly, use RavenDb (Embedded). It's part of the RavenDb document database. It's a fully in-memory database and works perfectly with unit tests :) I've done it with MSTest, NUnit and xUnit.
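A minimal sketch of that embedded, in-memory setup (assuming the Raven.Client.Embedded package; the document class here is made up for the example):
using Raven.Client.Embedded;

public class TestDoc
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class RavenExample
{
    public static void RoundTrip()
    {
        // RunInMemory keeps everything off disk, so each test starts from a clean slate.
        using (var store = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
        using (var session = store.OpenSession())
        {
            session.Store(new TestDoc { Name = "test" });
            session.SaveChanges();
        }
    }
}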
Secondly, you can use NHibernate with SQLite if you don't want to use RavenDb. Ayende has a post about using this.
I've gone the route of creating tables and data during a setup method in a unit test class, running tests, then cleaning up during the teardown. Yes, this method works, but if you end up using your unit tests for debugging purposes, invariably you will run the setup, debug something, then stop in the middle without doing the teardown. It's very brittle, and you will probably end up (in the long run) with bad data in your test database and/or unusable unit tests. I personally think it's best to mock the database layer using a mocking framework. I do understand that sometimes it's best to do logic in the database. For those cases you can use a tool like DBFit to write tests for your database layer.
