I was looking through an old project and wanted to see if anyone had a suggestion on how to hide certain methods from being called by various layers. This was a 3-tier project: web application -> web service -> database.
In the application there is, for example, a User object. When a User was being updated, the web application would create a User object and pass it to the web service. The web service would use the DataAccessLayer to save the User object to the database. After looking at this I was wondering if instead I should have made a Save method in the User class. That way the service could simply call Save on the User object, which would trigger the db update.
However, doing it this way would expose Save to be called from the web application as well, correct? Since the web application also has access to the same User object.
Is there any way around this, or is it better to avoid this altogether?
There is a separation of concerns in keeping the User object as an object that only holds data, with no logic in it. You'd better keep it separated, for the following reasons (a sketch of the separation follows the list):
As you stated, it is bad practice, since the 'Save' functionality will be exposed to other places/classes where it is irrelevant to them (this is important in programming generally).
Modifying the service layer - I guess you are using a WCF web service, as you can transfer a .NET object (C#/VB) to the service via SOAP. If you put the saving logic in the 'User' object, you can't replace it with another web service that receives simple textual data structures like JSON or XML, or that simply doesn't support .NET objects.
Modifying the data storage layer - If you want, for example, to store the data somewhere else, such as another database like MongoDB, RavenDB, Redis or whatever you want, you will have to reimplement each class that is responsible for updating the data. This is also relevant for unit testing and mocking, making them more complicated to write.
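To illustrate the separation, here is a minimal sketch (all type names are hypothetical): the User object stays a pure data holder, and saving lives behind the service/data tier where the web application can't reach it.

// Data-only object, shared across tiers.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    // ... data only, no persistence logic
}

// Data access abstraction, visible only to the service tier.
public interface IUserRepository
{
    void Save(User user);
}

// Service tier: the only place that can trigger a database update.
public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public void UpdateUser(User user)
    {
        _repository.Save(user);
    }
}

Since the web application only references the User type (not IUserRepository or the service internals), there is no Save for it to call.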
I use Fluent NHibernate code to create a MySQL database SessionFactory. No config files (just one value for the connection string in the connectionStrings section of the configuration file).
The SessionFactory creation code is contained in a Data tier class, SessionFactoryManager, which holds a singleton internal SessionFactory used by the Data and Business tiers to get all sessions via SessionFactoryManager.OpenSession().
Some of my Business tier methods internally call SessionFactoryManager.OpenSession() to create sessions in a way that is transparent to the Presentation layer. So, when calling these methods there is no parameter or return value involving a session (to keep the Presentation layer "session-agnostic" when using those Business tier methods).
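For illustration, the manager has roughly this shape (the connection-string name and mapping class are simplified placeholders):

using System.Configuration;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class SessionFactoryManager
{
    // Singleton factory, hard-wired to the MySQL Fluent configuration.
    private static readonly ISessionFactory Factory = Fluently.Configure()
        .Database(MySQLConfiguration.Standard.ConnectionString(
            ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UserMap>())
        .BuildSessionFactory();

    // Both the Data and Business tiers get their sessions here.
    public static ISession OpenSession()
    {
        return Factory.OpenSession();
    }
}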
My problem comes when I write the integration tests for the Business layer: I would like to make them run on a SQLite in-memory database. I create a SessionFactoryManager which uses Fluent configuration to configure the SQLite database.
But when testing those methods that internally create the session, I cannot tell them to use my testing SessionFactory (configured to use SQLite). So the "real" SessionFactory is called, and the MySQL database is used, not SQLite.
I'm thinking of several possible solutions, but none of them seems right.
I could migrate the NHibernate configuration in the Data layer to config files, and make different NHibernate config files for the development/production and test environments, but I would really prefer to stick with Fluent code.
I could also modify my Data layer to use a single configuration value, databaseMode or similar, that selects the database to be used (the in-memory testing one or the real one), and write switch(databaseMode) statements like "case InMemory: { ... fluent code for in-memory SQLite ... } case Standard: { ... fluent code for the standard database ... }". I don't like this approach at all; I don't want to modify my Data tier code functionality just for testing purposes.
Notice that I'm not testing the Data layer, but the Business layer. I'm not interested in testing NHibernate mappings, DAOs or similar functionality; I already have unit tests for that, running fine against a SQLite database.
Also, changing databases is not a requirement of my application, so I'm not particularly interested in implementing significant changes that would allow me to dynamically change the DBMS; I only came to this need in order to write the tests.
A significant point: when using in-memory SQLite, the database connection must be the same for all new sessions; otherwise the database objects are not available to the new sessions. So when creating a new session with SessionFactory.OpenSession(), a "connection" parameter must be provided, but this parameter should not be used with a non-in-memory database. So the switch(databaseMode) would be needed for every single session creation - another Data layer code change that I don't like at all.
I'm seriously considering giving up and running my tests against the real database, or at least against an empty one, with its objects created and dropped on each test execution. But then test execution will surely be slower. Any ideas? Thanks in advance.
Finally my solution was Inversion of Control: I changed my data tier so I can inject a custom SessionFactoryBuilder class that performs the Fluently.Configure(...) magic.
In my data tier I use the "real" MySqlSessionFactoryBuilder; in my test projects I write a TestMySqlFactoryBuilder or TestSQLiteSessionFactoryBuilder class, or whatever I need.
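For illustration, the injection point looks roughly like this (the Fluent NHibernate calls are real; the type names and mapping class are mine):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public interface ISessionFactoryBuilder
{
    ISessionFactory Build();
}

// Test-project builder: same contract, SQLite in-memory configuration.
public class TestSQLiteSessionFactoryBuilder : ISessionFactoryBuilder
{
    public ISessionFactory Build()
    {
        return Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<UserMap>())
            .BuildSessionFactory();
    }
}

The production MySqlSessionFactoryBuilder implements the same interface and wraps the MySQL Fluent code shown in the question; SessionFactoryManager just receives whichever builder the host (application or test) supplies.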
I still have problems with the SQLite feature that requires the same connection to be used for all sessions and passed as a parameter in every OpenSession() call. For the moment I have not modified my data tier to add that feature, but I would like to do it in the future, probably by adding to my SessionFactory singleton a private static member to store the connection used for SchemaExport, plus a private static boolean like PreserveConnection stating that this connection must be kept and reused in every OpenSession(), and also wrapping OpenSession() to make sure that no session is opened directly.
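A rough, untested sketch of that plan (hypothetical names):

using System.Data;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public class SessionFactoryManager
{
    private static IDbConnection _schemaConnection;  // connection used for SchemaExport
    private static bool _preserveConnection;         // true only for in-memory SQLite
    private readonly ISessionFactory _factory;

    public SessionFactoryManager(ISessionFactory factory, bool preserveConnection)
    {
        _factory = factory;
        _preserveConnection = preserveConnection;
    }

    public void CreateSchema(Configuration configuration)
    {
        // Keep the connection the schema was exported on: an in-memory
        // SQLite database disappears when its connection closes.
        _schemaConnection = _factory.OpenSession().Connection;
        new SchemaExport(configuration).Execute(false, true, false, _schemaConnection, null);
    }

    // Wrap all session creation here so nothing opens a session directly.
    public ISession OpenSession()
    {
        return _preserveConnection && _schemaConnection != null
            ? _factory.OpenSession(_schemaConnection)
            : _factory.OpenSession();
    }
}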
I'm very new at WCF (and .NET in general), so I apologize if this is common knowledge.
I'm designing a WCF solution (currently using Entity Framework to access the database). I want to grab a (possibly very large) set of data from the database, and return it to the client, but I don't want to serialize the entire set of data over the wire all at once, due to performance concerns.
I'd like the operation to return some sort of object to the client that represents the resulting data, and I'd like to deal with that data on the client, being able to navigate through it backwards and forwards and retrieve the actual data over the wire as needed.
I don't want to write a lot of client code to individually find out which rows meet my search criteria and then make separate calls to get each record, if I can help it. I'm trying to keep the client as simple as possible.
Ideally, I'd like to write the client code similar to something like the below pseudocode:
Reference1.Service1Client MyService = new Reference1.Service1Client("Service1");
DelayedDataSet<MyRecordType> MyResultSet = MyService.GetAllCustomers();
MyResultSet.First();
while (!MyResultSet.Eof)
{
Console.WriteLine(MyResultSet.CurrentRecord().CUSTFNAME + " " + MyResultSet.CurrentRecord().CUSTLNAME);
Console.WriteLine("Press Enter to see the next customer");
Console.ReadLine();
MyResultSet.Next();
}
Of course, DelayedDataSet is something I just made up, and I'm hoping something like it exists in .NET.
The call to MyService.GetAllCustomers() would return this DelayedDataSet object, which would not actually contain the actual records. The actual data wouldn't come over the wire until CurrentRecord() is called. Next() and Previous() would simply update a cursor on the server side to point to the appropriate record. I don't want the client to have any direct visibility into the database or Entity Framework.
I'm guessing that the way I wrote the code probably won't work over WCF, and that functions like CurrentRecord(), Next(), First(), etc. would have to be separate service contract operations. I guess I'm just looking for a way to do this without having to write all my own code to cache the results on the server, somehow persist the data sets server-side, write all the retrieval and navigation code in my service library, etc. I'm hoping most of this is already done for me.
It seems like this would be a very commonly needed function. So, does something like this exist?
-Joe
No, that's not what WCF is designed to do.
In WCF, the very basic core architecture is that you have a client and a server, and nothing but (XML-)serialized data going between the two over the wire.
WCF is not a remote procedure call mechanism or some sort of remote object mechanism - there is no connection between the client and the server except the serialized message, which conforms to the service (and data) contracts defined between the two.
WCF is not designed to handle huge data volumes - it's designed to handle individual messages (GetCustomerByID(42) and such). Since WCF is designed from the ground up to be interoperable with other platforms (non-.NET, too, like Java, Ruby etc.), you should definitely not be using heavyweight .NET-specific types like DataSet anyway - use proper objects.
Also, since WCF ultimately serializes everything to XML and sends it across the wire, all the data being passed must be expressible in XML schema - which excludes interfaces and/or generics.
From what I'm reading in your post, what you're looking for is more of an "in-proc" data access layer - not a service layer. So if you want to keep going down this path, you should investigate the repository and unit-of-work patterns in conjunction with Entity Framework.
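For example, a minimal repository sketch over Entity Framework (all type names hypothetical; the DbContext acts as the unit of work):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Hypothetical EF context; the context itself is the unit of work.
public class MyDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public interface ICustomerRepository
{
    IList<Customer> GetPage(int pageIndex, int pageSize);
}

public class CustomerRepository : ICustomerRepository
{
    private readonly MyDbContext _context;

    public CustomerRepository(MyDbContext context)
    {
        _context = context;
    }

    public IList<Customer> GetPage(int pageIndex, int pageSize)
    {
        return _context.Customers
            .OrderBy(c => c.Id)          // Skip/Take require a stable ordering
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .ToList();
    }
}

If you do keep a service boundary, the same pageIndex/pageSize parameters can be exposed as an ordinary WCF operation that returns one page of DTOs per call - that fits WCF's message-per-request model, unlike a server-side cursor.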
More info:
MSDN: What is Windows Communication Foundation?
WCF Essentials—A Developer's Primer
Picture of the very basic WCF architecture from that Primer - there's only a wire with a serialized message connecting client and server - nothing more; but serialization will always happen
We are mixing workflows: a workflow using receive activities more towards the end, but at the start we want to pass in some arguments (not using a receive activity!).
Our workflows are already being created and resumed using a dynamic endpoint with IWorkflowCreation and a class derived from WorkflowHostingEndpoint. In OnGetCreationContext the creationContext is filled with WorkflowArguments and the workflow runs. At a later point the receive activities create a bookmark which can be resumed with a message. All seems nice.
But in a xamlx there are no WorkflowArguments. I understand why, except that I want them anyway. I thought about an activity in which I can write some code to get the arguments myself, but I do need some help there.
Or is there another way to pass the WorkflowArguments along into a xamlx without using messaging?
You can't pass arguments into a starting workflow service except through the SOAP message that starts it. But there is nothing preventing you from reading properties in your workflow service, so it is perfectly fine to read settings or something similar instead of passing them in at startup.
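For example, a minimal sketch of a custom activity that reads a value from the config file (the activity name is made up):

using System.Activities;
using System.Configuration;

public sealed class ReadSetting : CodeActivity<string>
{
    public InArgument<string> Key { get; set; }

    // Looks the key up in appSettings instead of receiving it as a workflow argument.
    protected override string Execute(CodeActivityContext context)
    {
        return ConfigurationManager.AppSettings[Key.Get(context)];
    }
}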
We have solved this exact situation by creating another WCF service which sits alongside our xamlx service on a slightly different URL (e.g. /WorkflowMetadata), and this is where we implement a service method that returns a dictionary of argument names to types.
In the implementation of this service we simply read the xamlx and determine the arguments.
This is what we use to interrogate a target workflow in an activity designer when creating something like a launch-workflow activity.
Creating an activity will not work, as that activity would need an instance in order to run. All you want is some metadata about the xamlx service. And if you are using a WorkflowCreationEndpoint to construct a creation context, then you are probably only allowing a dictionary of string to object as the start parameters, so standard metadata will not work. This left us with the only option being to provide another service beside the workflow that serves the metadata.
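The contract shape is roughly this (names hypothetical; note that System.Type is not a data contract, so the sketch returns type names as strings):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IWorkflowMetadata
{
    // Maps argument name -> assembly-qualified type name, determined
    // by reading the target xamlx.
    [OperationContract]
    IDictionary<string, string> GetWorkflowArguments();
}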
Background here: http://blog.petegoo.com/index.php/2011/09/02/building-an-enterprise-workflow-system-with-wf4/
I am in the process of updating a website for the third time in 2 years. It looks like this is going to happen all of the time, and several websites are using the same DB. I want to use the same code for all of them and keep it easy to update in the future. So I plan on writing some interfaces and then placing the business logic in a service, to keep things consistent across the board and add in some unit testing.
So I am looking at my current repositories and I am not sure what should be in my Interface and what should be in my Service.
For example, I have an Add method - a no-brainer: I have an Add in the interface and an Add in the Service.
Then I have an AuthenticateAccountManager method that takes 3 parameters. Should this be in both, or just in the Service, with a simple Get method in my interface (say, by Username), and then do the validation against the other 2 properties in the Service?
I also have a QualifyPartner that sets a bool to true. Should this just be in the Service and, again, have a simple Get method in my interface, trying to keep that as small as possible?
Following the Separation of Concerns principle, AuthenticateAccountManager is a service-level operation. It should call into your repository, which will return the raw User data. The service then authenticates or not, based on what is returned by the repository.
The general guideline is that the repository is responsible for retrieval and committing of data only. Interpreting and executing behaviors based on the data is business logic.
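For example, a minimal sketch of that split (hypothetical names; the plain-text credential check is only for illustration):

// Raw data object returned by the repository.
public class AccountManager
{
    public string Username { get; set; }
    public string Password { get; set; }
    public string Pin { get; set; }
}

// Repository: retrieval and persistence only, no business rules.
public interface IAccountManagerRepository
{
    AccountManager GetByUsername(string username);
    void Add(AccountManager manager);
}

// Service: interprets the data and decides whether to authenticate.
public class AccountManagerService
{
    private readonly IAccountManagerRepository _repository;

    public AccountManagerService(IAccountManagerRepository repository)
    {
        _repository = repository;
    }

    public bool AuthenticateAccountManager(string username, string password, string pin)
    {
        AccountManager manager = _repository.GetByUsername(username);
        return manager != null
            && manager.Password == password
            && manager.Pin == pin;
    }
}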
I've read some books on creating stateless websites and some about stateful client applications, but a lot of complexity comes along when you have to combine both. We have a Flex application that needs to persist data to a database via .NET services. Things to keep in mind are:
- Concurrency (optimistic/pessimistic)
- Performance: Flex needs to load lots of data, so lazy loading is often necessary.
- Do you use DTOs to transfer data between server and client?
I'll tell you the history of our product. We've used SubSonic from the beginning as an O/R mapper. SubSonic objects are converted to DTOs written by us, and these DTOs are transferred to the client. Client-side, the DTOs are converted to the domain model. If a domain model object needs to be saved client-side, it is converted back to a DTO and sent to the server. Server-side, the DTO is converted to a SubSonic object and saved to the database.
Now, some time ago, we needed the domain model on the .NET server side... so now we have three models on the server side: the SubSonic model, the DTO model and the domain model. The DTO model is simpler and resembles the database more; the domain model has much more logic. It gets complex... We now have to keep the AS3 domain model code synchronized with the C# domain model code. If we could do it again (or get time to refactor) I think we wouldn't use the DTOs anymore, but transfer the domain model between client and server. The question is whether this is realistic. DTOs are simple objects, so they're easy to transfer. Domain model objects can be very complex.
Are there books on how to create an architecture for this kind of application? Books written by someone with lots of experience? Do you have experience with this?
The reality is that sharing objects between the client and the server is quite complex. Here's what you need to make it happen:
The easy/non-scalable way:
Inherit all of your objects from MarshalByRefObject. If you create Object A on the server and send it to the client, any client modifications to the object will automatically be forwarded to the server.
While this sounds like the perfect solution, it has two major problems:
The client and server are tightly coupled with .NET (bye-bye Web Services)
It can be a performance nightmare. All method/property access will be forwarded to the server. If you choose this route, your objects should really be designed for chunky calls, not chatty ones.
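For illustration, the easy route is nothing more than inheritance (sketch):

using System;

public class Customer : MarshalByRefObject
{
    public string Name { get; set; }    // every get/set from the client is a remote call

    // A "chunky" method: one round trip replaces several property accesses.
    public void Rename(string firstName, string lastName)
    {
        Name = firstName + " " + lastName;
    }
}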
The scalable/hard way:
Instead of using MarshalByRefObject, you would use DataContract/Serializable objects. However:
If you create Object A on the server and send it to the client, the client will receive a copy of the object (let's call it Object B). When you send Object B back to the server, the server will receive a copy of Object B (let's call it Object C).
But you really want the server to treat Object A and Object C as the same. Unfortunately, the CLR cannot do this, so you'll need an Object Merger to sit on both the client and the server.
The Object Merger would contain a dictionary of all objects within the model, and know how to identify two instances as being the same, and merge any values from the receiving end. For instance, if the client already has Object C in memory, and receives an updated copy from the server, it would copy over the values.
Unfortunately, this is also fraught with problems, because you need to ensure that object references are preserved correctly. You can't just blindly update all properties on an object, because the object may have existing references to other objects, which in turn may require their own merging. On top of all this, you would also need to track added/removed objects contained in lists or dictionaries.
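A highly simplified sketch of the merger idea (hypothetical types; a real one must also handle the reference and collection tracking described above):

using System;
using System.Collections.Generic;

public interface IHasId
{
    Guid Id { get; }
}

public class ObjectMerger
{
    // All object instances known on this side, keyed by identity.
    private readonly Dictionary<Guid, IHasId> _known = new Dictionary<Guid, IHasId>();

    // Returns the canonical local instance, merging values from the incoming copy.
    public T Merge<T>(T incoming, Action<T, T> copyValues) where T : class, IHasId
    {
        IHasId existing;
        if (_known.TryGetValue(incoming.Id, out existing))
        {
            copyValues((T)existing, incoming);  // copy scalar values onto the known instance
            return (T)existing;
        }
        _known[incoming.Id] = incoming;
        return incoming;
    }
}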
I'm adding n-tier support to my own framework, so I'm going through the same exercise right now (I'm taking the "scalable/hard" route). Fortunately, I have a lot of the supporting infrastructure in place to assist with identification, merging, etc. If you're starting from scratch, it would be a significant piece of work.
P.S. Add lazy-loading proxies into the mix (I'm using NHibernate), and it gets even more interesting...
Go read anything by Fowler, particularly his design patterns stuff (especially the assembler pattern, and why you need what you are already doing).
Fowler's Patterns Of Enterprise Application Architecture
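A minimal, hypothetical sketch of the assembler idea - one class owns the mapping between domain objects and DTOs, so neither side leaks into the other:

public class User
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    public User(int id, string name)
    {
        Id = id;
        Name = name;
    }
    // ... domain logic lives here
}

public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class UserAssembler
{
    public static UserDto ToDto(User user)
    {
        return new UserDto { Id = user.Id, Name = user.Name };
    }

    public static User ToDomain(UserDto dto)
    {
        return new User(dto.Id, dto.Name);
    }
}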