Is static caching DatabaseFactory.CreateDatabase acceptable? - asp.net

Is it acceptable to cache an instance of the database connection on application start?
Looking at the MSDN documentation on thread safety, I quote:
Any public static [...] members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Given that, is it acceptable/safe to write code such as the example below:
public static class BookingMapper
{
    public static Database Db { get; set; }

    static BookingMapper()
    {
        Db = DatabaseFactory.CreateDatabase();
    }

    public static string GetBooking(int id)
    {
        using (DbCommand cmd = Db.GetStoredProcCommand("getBooking"))
        {
            Db.AddInParameter(cmd, "@Id", DbType.Int32, id);
            using (IDataReader dr = Db.ExecuteReader(cmd))
            {
                ...
            }
        }
    }
}
If it is acceptable, what are the benefits/drawbacks of using such an approach over simply instantiating the Database on every single method call?
Thanks in advance.
Update:
Further research has pointed me to a PrimaryObjects.com article which, in the "Putting the Database Factory to Use" section, suggests that this is acceptable. But I'm still wondering if there are pros/cons to doing it this way?
Similar question

1) There are two ways to interpret that standard phrase on Thread Safety from MSDN, and I wish they'd clarify it. Your interpretation would be nice, but I believe that what it means is that:
Any members (methods, fields, properties, etc) that are part of this type, and that are public and static, are thread safe
(i.e. there are two ways to interpret the subphrase "members of this type")
2) Generally, you don't want to share a db connection around - you want to open a connection, do your job, and close it. You can't generally have multiple open readers associated with a single connection (this is generic db/connection advice, not ent library specific).
3) On some further reading inside the ent library, the Database object returned by the CreateDatabase call isn't a connection itself, and it looks like the connection management is handled as I stated in point 2. So it looks like the Database object itself can be safely shared around.
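For comparison, the per-call alternative the question mentions would look roughly like this. This is only a sketch: the method name and the way the result is read from the reader are made up, while the Enterprise Library calls mirror the question's code.
public static string GetBookingPerCall(int id)
{
    // CreateDatabase returns a Database object, not an open connection;
    // the actual connection is opened and closed inside ExecuteReader.
    Database db = DatabaseFactory.CreateDatabase();
    using (DbCommand cmd = db.GetStoredProcCommand("getBooking"))
    {
        db.AddInParameter(cmd, "@Id", DbType.Int32, id);
        using (IDataReader dr = db.ExecuteReader(cmd))
        {
            return dr.Read() ? dr.GetString(0) : null;
        }
    }
}
Since the Database object holds no open connection, the cached static version mainly saves the repeated factory/configuration lookup; functionally the two approaches should behave the same.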

Related

Custom Binding Required for SpringMVC Form Field

I ran into the following SpringMVC issue: there is a domain object which uses a certain Address sub-object, but the getters/setters have to be tweaked to use a different Address object via conversion. This is an architectural requirement.
public class DomainObj {
    protected DomainObj.Address address;

    public anotherpackage.new.Address getAddress()
    {
        return convertFrom(address);
    }

    public void setAddress(anotherpackage.new.Address value)
    {
        this.address = convertTo(value);
    }
}

// Internal Address object, old, #1
public static class Address {
    protected String street1;
    protected String street2;
    // etc., getters/setters
}
Now, in the JSP, I bind an input text field to the new Address object (the result of the conversion), since that's what we have to deal with. In this new, second Address object (anotherpackage.new.Address), there is a field, e.g. "addressLine1", which is different from the old object's "street1":
<form:input path="topObject.address.addressLine1" />
My problem is that the setter, setAddress(), never gets called in this case for binding (verified in the Debugger). Any solutions to this?
Your options are:
a) do not bind directly to the business object
b) configure a binder to do the conversion to your domain object
Discussion:
Usually in enterprise-class software we don't want to bind directly to the business objects, which are usually entities (in the JPA sense). This is because session handling is a pain. Usually we code against DTOs, and when one is received from the front end we read the appropriate object from the repository (ORM) layer, update it, and save it back again (I've only described updates because they're the hardest, but a similar model works for everything).
However, Spring MVC binders offer a way of binding anything to anything. They're a bit complicated and it'll take too long to explain here, but they're covered in the Spring documentation; you want to be looking at converters and a conversion service. There are SO Q/A's on this topic, for example...

Best practice to implement sending message to custom groups in SignalR

I am developing a real-time multiplayer game using SignalR. The message delivery logic is not simple enough for me to handle with Groups.
For example, one message must be delivered to all users whose custom property equals some dynamic value. This means the target audience cannot be handled by
Clients.All
Clients.AllExcept
I have a mapping class something like this:
public class Player
{
    public dynamic Client { get; set; }
    public string ConnectionId { get; set; }
    public string Name { get; set; }
}
At some point I have detected all of the target players and collected them in a List.
What is the best way to send a message to everyone in the list? Enumerating through the list and calling
foreach (var loopPlayer in players)
{
    loopPlayer.Client.sendMessage(message);
}
Or
List<string> ids = new List<string>();
foreach (var loopPlayer in players)
{
    ids.Add(loopPlayer.ConnectionId);
}
Clients.Clients(ids).sendMessage(message);
My concern about the first one is that it will serialize the message every time. About the second one, I don't know how it works behind the scenes.
Both approaches work, but I am concerned about performance and trying to find the best practice. Is there any other approach I could use?
As you said, enumerating over the list of clients serializes the message each and every time (and stores those serialized messages in internal buffers, etc.). If the message is the same, this is unnecessary CPU/memory overhead.
Clients.Clients(ids) serializes the message only once, so performance-wise it's definitely the way to go.
The message delivery logic is not simple enough for me to handle with Groups. For example, one message must be delivered to all users whose custom property equals some dynamic value.
Groups work in scale-out scenarios out of the box, which is a huge benefit if you ever find yourself needing to scale out. So maybe try to find some way to simplify the group-assignment logic, even at the cost of delivering some messages to more clients and doing the filtering client side...
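If the dynamic value can be folded into a group name whenever a player's property changes, delivery stays a single call and the message is still serialized once. A rough sketch, assuming the Microsoft.AspNet.SignalR hub API; the JoinRegion/BroadcastToRegion methods and the sendMessage client callback are hypothetical names:
public class GameHub : Hub
{
    // Called by the client whenever the custom property changes;
    // the group name simply encodes the property value.
    public async Task JoinRegion(string oldRegion, string newRegion)
    {
        if (!string.IsNullOrEmpty(oldRegion))
            await Groups.Remove(Context.ConnectionId, "region-" + oldRegion);
        await Groups.Add(Context.ConnectionId, "region-" + newRegion);
    }

    // One call, one serialization, delivered to every connection in the group.
    public void BroadcastToRegion(string region, string message)
    {
        Clients.Group("region-" + region).sendMessage(message);
    }
}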

Usage of repository between EF model and code consumer

I have binary data in my database that I'll have to convert to a bitmap at some point. I was wondering whether it's appropriate to use a repository and do the conversion there. My consumer, which is a presentation layer, will use this repository. For example:
// This is a class I created for modeling the item as is.
public class RealItem
{
    public string Name { get; set; }
    public Bitmap Image { get; set; }
}

public abstract class BaseRepository
{
    // Using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
    [Dependency]
    public Context Context { get; set; }
}

public class ItemRepository : BaseRepository
{
    public List<RealItem> Select()
    {
        IEnumerable<Items> items = from item in Context.Items select item;
        List<RealItem> lst = new List<RealItem>();
        foreach (var itm in items)
        {
            MemoryStream stream = new MemoryStream(itm.Image);
            Bitmap image = (Bitmap)Image.FromStream(stream);
            RealItem ritem = new RealItem { Name = itm.Name, Image = image };
            lst.Add(ritem);
        }
        return lst;
    }
}
Is this a correct way to use the repository pattern? I'm learning this pattern and I've seen a lot of examples online that are using a repository but when I looked at their source code... for example:
public IQueryable<object> Select()
{
    return from q in base.Context.MyItems select q;
}
As you can see, almost no behavior is added to the system by their approach except for hiding the data access query, so I was confused that maybe the repository is something else and I got it all wrong. In the end there should be extra benefits to using them, right?
Update: as it turned out, you don't need repositories if there is nothing more to be done on the data before sending it out. But wait! No abstraction on the LINQ query? That way the client has to provide the query statements for us, which can be a little unsafe and hard to validate, so maybe the repository is also providing an abstraction over data queries? If this is true then having a repository is always an essential need in a project's architecture! However, this abstraction can also be provided by using SQL stored procedures. What is the choice if both options are available?
Yes, that's the correct way: the repository contract serves the application needs, dealing only with application objects.
The (bad) example you are seeing most of the time couples any repository implementation to IQueryable, which may or may not be implemented by the underlying ORM and is, after all, an implementation detail.
The difference between IQueryable and IEnumerable is important when dealing with remote data, but that's what the repository does in the first place: it hides the fact that you're dealing with storage which can be remote. For the app, the repository is just a local collection of objects.
Update
The repository abstracts the persistence access, it makes the application decoupled from a particular persistence implementation and masks itself as a simple collection. This means the app doesn't know about Linq2Sql, Sql or the type of RDBMS used, if any. The app sends/receives objects from the repo, while the repo actually persists or loads objects. The app doesn't care how the repo does it.
I consider the repository a very useful pattern and I'm using it in every project, precisely because it marks the boundary between the application (as the place where problems and solutions are defined and handled) and storage/persistence, where data is saved.
You can make your repository generic and get more value out of it. And make sure you are using an interface (IItemRepository) to access repositories in the manager layer, so that you can replace your repositories with another data access method via a new repository implementation. Here is a good example of how to do this.
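As a rough illustration of that suggestion, here is a minimal sketch of a generic contract plus an item-specific interface; the names (IRepository, IItemRepository, GetByName) are hypothetical, not from the original post:
// Generic contract the rest of the application codes against.
public interface IRepository<T> where T : class
{
    IList<T> GetAll();
    T GetById(int id);
    void Add(T entity);
    void Remove(T entity);
}

// Item-specific contract; the manager layer depends only on this interface,
// so the EF-backed ItemRepository can be swapped for a fake or another store.
public interface IItemRepository : IRepository<RealItem>
{
    IList<RealItem> GetByName(string name);
}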

Entity Framework ObjectContext re-usage

I'm learning EF now and have a question regarding the ObjectContext:
Should I create an instance of ObjectContext for every query (function) when I access the database?
Or is it better to create it once (singleton) and reuse it?
Before EF I was using the Enterprise Library Data Access Block and created an instance of the data access class for each data access function...
I think the most common way is to use it per request. Create it at the beginning, do what you need (most of the time these are operations that require a common ObjectContext), and dispose of it at the end. Most DI frameworks support this scenario, but you can also use an HttpModule to create the context and place it in HttpContext.Current.Items. Here is a simple example:
public class MyEntitiesHttpModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += ApplicationBeginRequest;
        application.EndRequest += ApplicationEndRequest;
    }

    private void ApplicationEndRequest(object sender, EventArgs e)
    {
        if (HttpContext.Current.Items[@"MyEntities"] != null)
            ((MyEntities)HttpContext.Current.Items[@"MyEntities"]).Dispose();
    }

    private static void ApplicationBeginRequest(Object source, EventArgs e)
    {
        var context = new MyEntities();
        HttpContext.Current.Items[@"MyEntities"] = context;
    }
}
Definitely for every query. It's a lightweight object so there's not much cost incurred creating one each time you need it.
Besides, the longer you keep an ObjectContext alive, the more cached objects it will contain as you run queries against it. This may cause memory problems. Therefore, having the ObjectContext as a singleton is a particularly bad idea. As your application is being used you load more and more entities in the singleton ObjectContext until finally you have the entire database in memory (unless you detach entities when you no longer need them).
And then there's a maintainability issue. One day you try to track down a bug but can't figure out where the data was loaded that caused it.
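To illustrate that per-query style, here is a minimal sketch; MyEntities, Bookings and UserId are hypothetical names for a generated context, an entity set and a property:
public IList<Booking> GetBookingsForUser(int userId)
{
    // The context lives only for this one query and is disposed immediately,
    // so nothing accumulates in its internal cache.
    using (var context = new MyEntities())
    {
        return context.Bookings
                      .Where(b => b.UserId == userId)
                      .ToList();
    }
}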
Don't use a singleton... everyone using your app will share it, and all sorts of crazy things will happen when that object context is tracking entities.
I would add it as a private member
Like Luke says this question has been asked numerous times on SO.
For a web application, per request cycle seems to work best. Singleton is definitely a bad idea.
Per request works well because one web page has a User, maybe some Projects belonging to that user, maybe some Messages for that user. You want the same ObjectContext so you can go User.Messages to get them, maybe mark some messages as read, maybe add a Project and then either commit or abandon the whole object graph at the completion of the page cycle.
Late post here by 7 months. I am currently tackling this question in my app and I'm leaning towards the @LukLed solution by creating a singleton ObjectContext for the duration of my HttpRequest. For my architecture, I have several controls that go into building a page, and these controls all have their own data concerns that pull read-only data from the EF layer. It seems wasteful for them to each create and use their own ObjectContexts. Besides, there are a few situations where one control may pull data into the context that could be reused by other controls. For instance, in my master page, my header at the top of the page has user information that can be reused by the other controls on the page.
My only worry is that I may pull entities into the context that will affect the queries of other controls. I haven't seen that yet but don't know if I'm asking for trouble. I guess we'll see!
public class DBModel {
    private const string _PREFIX = "ObjectContext";

    // DBModel.GetInstance<EntityObject>();
    public static ObjectContext GetInstance<T>() {
        var key = CreateKey<T>();
        HttpContext.Current.Items[key] = HttpContext.Current.Items[key] ?? Activator.CreateInstance<T>();
        return HttpContext.Current.Items[key] as ObjectContext;
    }

    private static string CreateKey<T>() {
        return string.Format("{0}_{1}", _PREFIX, typeof(T).Name);
    }
}
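A usage sketch for the helper above; MyEntities and its Users set are hypothetical names for a generated context:
// Anywhere during the request, every control gets the same per-request instance.
var context = (MyEntities)DBModel.GetInstance<MyEntities>();
var users = context.Users.ToList();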

How to best create a test DB when doing TDD?

What's the best practice for creating test persistence layers when doing an ASP.NET site (e.g. an ASP.NET MVC site)?
Many examples I've seen use Moq (or another mocking framework) in the unit test project, but I want to, like... mock out my persistence layer so that my website shows data and stuff, but it's not coming from a database. I want to do that last. All the mocking stuff I've seen only exists in unit tests.
What practices do people use when they want to (stub?) fake out a persistence layer for quick and fast development? I use Dependency Injection to handle it and have some hard-coded results for my persistence layer (which is really manual and boring).
What are other people doing? Examples and links would be awesome :)
UPDATE
Just a little update: so far I'm getting a fair bit of mileage out of having a fake repository and a SQL repository, where each class implements an interface. Then, using DI (I'm using StructureMap), I can switch between my fake repository and the SQL repository. So far, it's working well :)
(also scary to think that I asked this question nearly 11 months ago, from when I'm editing this, right now!)
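That switch can be a single registration at application start. A rough sketch, assuming StructureMap 2.6-style configuration and hypothetical FakeBookingRepository / SqlBookingRepository classes implementing an IBookingRepository interface:
// Pick the implementation once at startup; everything that asks the container
// for IBookingRepository gets whichever one was registered.
bool useFakeData = true; // e.g. read from config
ObjectFactory.Initialize(x =>
{
    if (useFakeData)
        x.For<IBookingRepository>().Use<FakeBookingRepository>();
    else
        x.For<IBookingRepository>().Use<SqlBookingRepository>();
});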
Assuming you're using the Repository pattern from Rob Conery's MVC Store Front:
http://blog.wekeroad.com/mvc-storefront/mvc-storefront-part-1/
I followed Rob Conery's tutorial but ran into the same want as you. The best thing to do is move the mock repositories you've created into a separate project called Mocks; then you can swap them out pretty easily with the real ones when you instantiate your service. If you're feeling adventurous you could create a factory that takes a value from the config file to instantiate either a mock or a real repository,
e.g.
public static ICatalogRepository GetCatalogRepository(bool useMock)
{
    if (useMock)
        return new FakeCatalogRepository();
    else
        return new SqlCatalogRepository();
}
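For the config-file idea, the call site might look something like this; the "UseMockRepositories" appSetting name is made up for illustration (System.Configuration):
// Read the flag once from Web.config and let the factory decide.
bool useMock = bool.Parse(ConfigurationManager.AppSettings["UseMockRepositories"] ?? "false");
ICatalogRepository repository = GetCatalogRepository(useMock);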
or use a dependency injection framework :)
container.Resolve<ICatalogRepository>();
Good luck!
EDIT: In response to your comments, it sounds like you want to use a list and LINQ to emulate a db's operations, e.g. GetProducts and StoreProduct. I've done this before. Here's an example:
public class Product
{
    public int Identity { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    //etc
}

public class FakeCatalogRepository
{
    private List<Product> _fakes;

    public FakeCatalogRepository()
    {
        _fakes = new List<Product>();
        //Set up some initial fake data
        for (int i = 0; i < 5; i++)
        {
            Product p = new Product
            {
                Identity = i,
                Name = "product" + i,
                Description = "description of product" + i
            };
            _fakes.Add(p);
        }
    }

    public void StoreProduct(Product p)
    {
        //Emulate insert/update functionality
        _fakes.Add(p);
    }

    public Product GetProductByIdentity(int id)
    {
        //Emulate "SELECT * FROM products WHERE id = 1234"
        var aProduct = (from p in _fakes.AsQueryable()
                        where p.Identity == id
                        select p).SingleOrDefault();
        return aProduct;
    }
}
Does that make a bit more sense?
Boring or not, I think you're on the right track. I assume you're creating a fakeRepository that is a concrete implementation of your IRepository which in turn is injected into your service layer. This is nice because at some point in the future when you're happy with the shape of your entities and the behavior of your services, controllers, and views, you can then test drive your real Repositories that will use the database to persist those entities. Of course the nature of those tests will be integration tests, but just as important if not more so.
One thing that may be less boring for you, when the time comes to create your real repositories, is that if you use NHibernate for your persistence you will be able to let NHibernate generate your database after you create the NHibernate mappings for your entities, assuming you don't have to use a legacy schema.
For instance, I have the following method that is called by my SetUpFixture to generate my db schema:
public class SchemaBuilder
{
    public static void ExportSchema()
    {
        Configuration configuration = new Configuration();
        configuration.Configure();
        new SchemaExport(configuration).Create(true, true);
    }
}
and my SetUpFixture is as follows:
[SetUpFixture]
public class SetUpFixture
{
    [SetUp]
    public void SetUp()
    {
        SchemaBuilder.ExportSchema();
        DataLoader.LoadData();
    }
}
where DataLoader is responsible for creating all of my seed data and test data using the real repository.
This probably doesn't answer your questions but I hope it serves to reassure you in your approach.
Greg
Although I'm not using Asp.Net or the MVC framework I do have the need to test services without hitting the database. Your question triggered the writing up of a short (ok, maybe not so short) summary of how I do it. Not claiming it's the best or anything, but it works for us. We access data through a repository and when required we plug in an in-memory repository as explained in the post.
http://blogs.microsoft.co.il/blogs/kim/archive/2008/11/14/testable-data-access-with-the-repository-pattern.aspx
I am using a complete in-memory database with SQLite and ActiveRecord. Basically we delete and re-create the database before every integration test is run, so that the data is always in a known state. The contents of the database are inserted through code. So an example would be like this:
ActiveRecord.Initialize(lots of parameters)
ActiveRecord.DropSchema();
ActiveRecord.CreateSchema();
and then we just add lots of customers or whatever, DDD style:
customerRepository.Save(customer);
Another way to solve this could be using NDbUnit to maintain the state of the database.
I know this question is a bit old, but I've finally come up with an answer :)
Firstly, use RavenDb (Embedded). It's part of the RavenDb document database. It's a fully in-memory database and works perfectly with unit tests :) I've done it with MSTest, NUnit and xUnit.
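A minimal sketch of that embedded, in-memory setup (EmbeddableDocumentStore lives in Raven.Client.Embedded; the Booking class is just a stand-in document type):
// One in-memory document store; nothing touches disk, so every test starts clean.
using (var store = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
using (var session = store.OpenSession())
{
    session.Store(new Booking { Name = "test booking" });
    session.SaveChanges();

    // Wait for indexing so the query sees the document we just stored.
    var bookings = session.Query<Booking>()
                          .Customize(x => x.WaitForNonStaleResults())
                          .ToList();
}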
Secondly, you can use NHibernate with SQLite if you don't want to use RavenDb. Ayende has a post about using this.
I've gone the route of creating tables and data during a setup method in a unit test class, running tests, then doing clean-up during the teardown. Yes, this method works, but if you really end up using your unit tests for debugging purposes, invariably you will run the setup, debug something, then stop in the middle without doing the teardown. It's very brittle and you will probably end up (in the long run) with bad data in your test database and/or unusable unit tests. I personally think it's best to mock the database layer using a mocking framework. I do understand that sometimes it's best to do logic in the database. For these cases you can use a tool like DBFit to write tests for your database layer.
