In C# or Java, methods marked private can still be accessed using reflection or by dynamically loading the class. Of course, we have to know the method name to get hold of it. Still, I wonder how safe an application meant to secure a private database, bank account, etc. really is if it can still be hacked using reflection.
My question here is why is the Java Reflection API allowed to access the variables/methods even though they are private?
Even if reflection didn't exist, getting data from within a virtual machine would be pretty trivial for a determined hacker. The existence of reflection is an acknowledgement from the creators of these languages that a) it's extraordinarily convenient in some special cases and b) private methods by no means ensure security. Instead, to secure private data such as bank account information, further means of indirection should be used, such as storing the data in a remote database and providing a query mechanism with an aggressive ACL.
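To see how short the reflection route really is, here is a minimal C# sketch (the Account class and its balance field are hypothetical; in a full-trust process nothing prevents this):

using System;
using System.Reflection;

public class Account
{
    // "private" is an encapsulation tool, not a security barrier
    private decimal balance = 1000m;
}

public static class Demo
{
    public static void Main()
    {
        var account = new Account();

        // Reflection reaches the private field directly
        // (a sandbox with restricted ReflectionPermission could block this).
        FieldInfo field = typeof(Account).GetField(
            "balance", BindingFlags.NonPublic | BindingFlags.Instance);

        Console.WriteLine(field.GetValue(account)); // prints 1000
    }
}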
In an application using DDD concepts, I am unsure what should be injected into the constructor of a given class, and whether there is any standard for it.
For example, between the Application, Domain, and Repository layers.
1) For a ClientAppService (Application layer) that needs user functionality, should I inject UserApplicationService and from it call UserService (Domain), or inject UserService directly into ClientAppService?
2) In the ClientService (domain) should I inject UserService and from it call UserRepository or could I inject UserRepository directly into ClientService?
I'm concerned about cyclic references if I inject peer classes.
But I also think I should not inject the repository of another entity, because often the repository's methods have a rule in the service that must be called first.
Has anyone else had this question? How do you usually handle it?
Thinking about separation of concerns and allocation of responsibilities, you should inject exactly what your artifact depends upon. This may sound obvious, but it goes a little deeper.
Considering your (2) example:
In the ClientService (domain) should I inject UserService and from it call UserRepository or could I inject UserRepository directly into ClientService?
You should probably first ask yourself: which capability does your ClientService depend upon?
If it (ClientService) cares about being able to find a user from information it currently possesses, it should probably receive the UserRepository directly and find the user on its own.
If it (ClientService) needs a user but doesn't possess the information needed to find one (that information currently lives at the application-layer level), maybe ClientService should receive the User domain object directly, with the repository being used straight from the application level.
If it (ClientService) needs some kind of domain-relevant functionality from UserService as part of its operation, then, in that case, the UserService might be directly injected into ClientService. (A minimal sketch of the first two options follows.)
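To make the first two options concrete, here is a minimal C# sketch; all type and member names are hypothetical placeholders based on the question:

using System;

public class User { public Guid Id { get; set; } }

public interface IUserRepository
{
    User FindById(Guid id);
}

// Option 1: ClientService already holds the information needed for the lookup,
// so it receives the repository and finds the user itself.
public class ClientService
{
    private readonly IUserRepository _users;

    public ClientService(IUserRepository users)
    {
        _users = users;
    }

    public void AttachOwner(Guid ownerUserId)
    {
        User owner = _users.FindById(ownerUserId);
        // ... domain logic using owner ...
    }
}

// Option 2: the application layer does the lookup and hands the entity over,
// so this domain service has no repository dependency at all.
public class ClientServiceAlternative
{
    public void AttachOwner(User owner)
    {
        // ... domain logic using owner ...
    }
}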
Another possible discussion on this topic might be whether you really need all those Domain Services, or whether you would be better off calling rule-rich Entities/Aggregates straight from the Application Layer; that might make your overall design, injection patterns, and boundaries simpler.
Also, many times you might want to inject factories for your artifacts rather than the instantiated objects directly.
Another point might be made about:
But I also think I should not inject the repository of another entity, because often the repository's methods have a rule in the service that must be called first.
This might be evidence of some confusion inside your domain. The role of a repository should be along the lines of "finding your domain entity from the universe of possible entities". In that sense, a UserRepository enables you to find users among the users existing in your universe, so it should be a pretty isolated operation and shouldn't depend on services or other entities. If a user exists, it should be "findable" (and persistable, as it goes both ways) through the UserRepository.
In that case, you shouldn't worry about "injecting UserRepository in ClientService" from a dogmatic point of view. If the operation in your client service needs to find and use a User Entity, it should be alright for you to do so. What you might worry about is whether your entities/aggregates are well designed or if you have some kind of misplaced responsibilities that might be triggering this "feeling" of "I shouldn't be injecting this into that".
Domain Entities and Value Objects almost never use constructor injection.
This is motivated by separation of concerns; the responsibility of the objects in the domain model is to manage their own in-memory representations.
Other capabilities that they may need to do their work are passed to them as arguments.
The typical mechanism for this is the "domain service", described by Evans in chapter 5 of the blue book.
To sketch an example - suppose my order aggregate needs to update its quote when the line items change. I might pass in as an argument an interface that accepts a SKU and returns a Price. As far as Order is concerned, that lookup is happening "somewhere else". It doesn't care about the details. The implementation might load up another aggregate to look up its current state, or send a message to some remote system, or hard code an answer.
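A rough C# rendering of that sketch, with all names hypothetical:

using System;

public struct Sku
{
    public readonly string Value;
    public Sku(string value) { Value = value; }
}

public struct Price
{
    public readonly decimal Amount;
    public Price(decimal amount) { Amount = amount; }
}

// The domain service: Order only sees the capability, never the implementation.
public interface IPricingService
{
    Price PriceFor(Sku sku);
}

public class Order
{
    private decimal _quote;

    // The capability arrives as a method argument, not via constructor injection.
    public void UpdateQuote(Sku changedItem, int quantity, IPricingService pricing)
    {
        Price unit = pricing.PriceFor(changedItem);
        _quote += unit.Amount * quantity; // simplified; a real recalculation is richer
    }
}

The implementation behind IPricingService is free to load another aggregate, call a remote system, or hard-code an answer, exactly as described above.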
Domain Service implementations will often have injected dependencies on capabilities provided by the application or infrastructure layers.
I've read through this article trying to understand why you would want a session bean between the client and an entity bean. Is it because letting the client access the entity bean directly would let the client know everything about the database?
So by having a middleman (the session bean), you let the client see only the part of the database that is relevant to it, by implementing the business logic in a certain way. Possibly this also increases security.
Is the above statement true?
Avoiding tight coupling between the client & the business objects, increasing manageability.
Reducing fine-grained method invocations minimizes calls over the network, providing coarse-grained access to clients.
Can have centralized security & transaction constraints.
Greater flexibility & ability to cope with changes.
Exposing only what is required & providing a simpler interface to clients, hiding the underlying complexity, inner details, and interdependencies between business components.
The article you cite is COMPLETELY out of date. Check the date, it's from 2002.
There is no such thing anymore as an entity bean in EJB (they are currently retained for backwards compatibility, but are on the verge of being purged completely). Entity beans were awkward things: a model object (e.g. Person) that lived completely in the container, where access to every property (e.g. getName, getAge) required a remote container call.
In this day and age, we have JPA entities, which are POJOs and contain only data. Don't confuse a JPA entity with the ancient EJB entity bean; they sound similar but are completely different things. JPA entities can be safely sent to a (remote) client. If you are really concerned that the names used in your entity reveal your DB structure, you could use XML mapping files instead of annotations and use completely different names.
That said, session beans can still perfectly be used to implement the Facade pattern if that's needed. This pattern is indeed used to give clients a simplified and often restricted view of your system. It's just that the idea of using session beans as a Facade for entity beans is completely outdated.
It is to simplify the work of the client. The Facade presents a simple interface and hides the complexity of the model from the client. It also makes it possible for the model to change without affecting the client, as long as the facade does not change its interface.
It decouples the application logic from the business logic.
So the actual data structures and implementation can change without breaking existing code utilizing the APIs.
Of course, it hides the data structure from "unknown" applications if you expose your beans to external networks.
Hi, I am creating an API using WCF. My question can be broken down into two separate ones.
1) I have quite a few calls, for instance calls relating to customers, products, orders, and employees.
My question is: should all of this go into one public interface and class, e.g.
public interface IRestService
public class RestService : IRestService
Or should I have one for each, e.g.
public interface ICustomer
public class Customer : ICustomer
public interface IProducts
public class Products: IProducts
2) If you have an API which will be accessed by tens of thousands of users, with thousands of them concurrent, how would you set it up? What would your web.config settings be, for instance in terms of throttling? Also, what settings would you give your InstanceContextMode and ConcurrencyMode? Finally, what type of binding would it be, bearing in mind that websites and mobile phones can access the API?
For the sake of good practice, I would break up the API into separate interfaces so you have the option of splitting them into separate implementations in the future. You can still have just one service class implement all of the interfaces, like this:
public class RestService : ICustomer, IProducts, IOrders
However, it sounds as if you'd probably want to make them separate implementations anyway.
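A minimal sketch of what that split might look like in WCF (the operations and types here are hypothetical):

using System.ServiceModel;
using System.ServiceModel.Web;

public class Customer { /* ... */ }
public class Product { /* ... */ }

[ServiceContract]
public interface ICustomer
{
    [OperationContract]
    [WebGet(UriTemplate = "customers/{id}")]
    Customer GetCustomer(string id);
}

[ServiceContract]
public interface IProducts
{
    [OperationContract]
    [WebGet(UriTemplate = "products/{id}")]
    Product GetProduct(string id);
}

// One class implements both contracts today; they can be split into
// separate implementations later without touching the clients.
public class RestService : ICustomer, IProducts
{
    public Customer GetCustomer(string id) { /* lookup elided */ return new Customer(); }
    public Product GetProduct(string id) { /* lookup elided */ return new Product(); }
}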
In terms of concurrency settings, ask yourself what resources need to be used on each call. If your service class's constructor can be written without any lengthy startup, then use PerCall. If you need to initialize expensive resources, then I'd recommend InstanceContextMode.Single with ConcurrencyMode.Multiple, and make sure you write thread-safe code: e.g. lock() on any shared class properties or other shared resources before you use them (see the sketch below).
Database connections would not count as "expensive to initialize", though, because ADO will do connection pooling for you and eliminate that overhead.
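Here is a minimal sketch of that singleton/multiple combination; the service and its cache are hypothetical:

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IPriceService
{
    [OperationContract]
    decimal GetPrice(string productId);
}

// A singleton service handling calls concurrently: WCF no longer serializes
// access for you, so shared state must be guarded by hand.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class PriceService : IPriceService
{
    private readonly object _cacheLock = new object();
    private readonly Dictionary<string, decimal> _priceCache =
        new Dictionary<string, decimal>();

    public decimal GetPrice(string productId)
    {
        lock (_cacheLock) // serialize access to the shared cache
        {
            decimal price;
            if (!_priceCache.TryGetValue(productId, out price))
            {
                price = LoadPriceFromDatabase(productId); // expensive work done once
                _priceCache[productId] = price;
            }
            return price;
        }
    }

    private decimal LoadPriceFromDatabase(string productId)
    {
        return 0m; // stand-in for the real data access
    }
}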
Your throttling settings will be revealed by testing, as Ladislav mentions. You'd want to stress-test your service and use the results to get an idea of how many machines you'd need to service your anticipated load. Then you'll need a dedicated load balancer to route requests, either round-robin or based on the health of each server. Load balancers can be set up to GET a "systemhealth.asp" page and check the results: if you return an "OK", the machine stays in the pool; it can be temporarily removed from the pool if it times out or returns any other status.
Your binding would need to be WebHTTPBinding for REST. BasicHTTPBinding is meant for SOAP interfaces and doesn't support [WebGet], for example.
If it doesn't have to be a REST service, then you can get a bit more performance by using NetTcpBinding.
If you really have only a few operations, a single service can be used. Generally, services are logical collections of related operations, but the number of operations should be limited. Usually, if your service has more than 20 operations, you should think about refactoring.
Do you plan to use a REST service? I guess you do, because of your first interface example. In that case you need WebHttpBinding (or a similar custom binding) with the default InstanceContextMode (PerCall) and ConcurrencyMode (Single) values. The only other meaningful combination for a REST service is InstanceContextMode.Single with ConcurrencyMode.Multiple, but that will make your service a singleton, which can have an impact on your service implementation. My rule of thumb: don't use a singleton service unless you really need it.
Throttling configuration depends on your service implementation and on the performance of your servers. What do thousands of concurrent users really mean for you? Processing thousands of requests concurrently requires a good server cluster with a load balancer, or hosting in Azure (the cloud). Everything also depends on the speed of processing (the operation implementation) and the size of messages. The correct settings for MaxConcurrentInstances and MaxConcurrentCalls (which should be the same for PerCall instancing) should be revealed by performance testing. Note that the default values for service throttling changed in WCF 4.
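For illustration, the throttle can be set programmatically as well as declaratively (the same three values live under the serviceThrottling behavior element in web.config). The numbers below are placeholders, not recommendations, and RestService stands in for the service class from the question:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(RestService));

        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Illustrative numbers only; derive the real ones from performance tests.
        // For PerCall instancing, calls and instances should match.
        throttle.MaxConcurrentCalls = 64;
        throttle.MaxConcurrentInstances = 64;
        throttle.MaxConcurrentSessions = 200;

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}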
We have many instances in our application where we would like to access things like the currently logged-in user id in our business domain and data access layers. On login, we push this information to the session, so all of our front-end code has access to it fairly easily. However, we are having huge issues getting at that data in the lower layers of our application. We just can't seem to find a way to store a value in the business domain that has global scope just for the user (static classes and properties are of course shared across the application domain, which means all users share just one copy of the object). We have considered passing the session into our business classes, but then our domain is very tightly coupled to our web application. We want to keep the prospect of a WinForms version of the application possible going forward.
I find it hard to believe we are the first people to have this sort of issue. How are you handling this problem in your applications?
I don't think having your business classes rely on a global object is that great of an idea, and would avoid it if possible. You should be injecting the necessary information into them - this makes them much more testable and scalable.
So rather than passing a Session object directly to them, you should wrap up the information access methods that you need into a repository class. Your business layer can use the repository class as a data source (call GetUser() on it, for example), and the repository for your web app can use session to retrieve the requested information (return _session.User.Identity).
When porting it to WinForms, simply implement the repository interface with a new WinForms-centric class (i.e. GetUser() returns the Windows version of the user principal).
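A minimal sketch of that arrangement (the names here are hypothetical):

using System.Security.Principal;
using System.Web;

// The business layer sees only this abstraction.
public interface IUserInfoRepository
{
    IPrincipal GetUser();
}

// Web implementation: backed by the current HTTP context.
public class WebUserInfoRepository : IUserInfoRepository
{
    public IPrincipal GetUser()
    {
        return HttpContext.Current.User;
    }
}

// WinForms implementation: backed by the current Windows identity.
public class WinFormsUserInfoRepository : IUserInfoRepository
{
    public IPrincipal GetUser()
    {
        return new WindowsPrincipal(WindowsIdentity.GetCurrent());
    }
}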
In theory people will tell you it's a bad business practice.
In practice, we just needed the data from the session level available in the business layers all the time. :-(
We ended up having different storage engines united under a small interface.
public interface ISessionStorage
{
    SomeSessionData Data { get; set; }
    // ... and most of the data we need stored at "session" level
}

// and a singleton to access it
public static class Storage
{
    public static ISessionStorage SessionStorage;
}
This interface is available from almost anywhere in our code.
Then we have both a session-backed implementation and a singleton one:
public class WebSessionStorage : ISessionStorage
{
    public SomeSessionData Data
    {
        get { return HttpContext.Current.Session["somekey"] as SomeSessionData; }
        set { HttpContext.Current.Session["somekey"] = value; }
    }
}

public class WebFormsSessionStorage : ISessionStorage
{
    private static SomeSessionData _someSessionData; // this was before automatic get;set;

    public SomeSessionData Data
    {
        get { return _someSessionData; }
        set { _someSessionData = value; }
    }
}
On initializing the application, the website will do a
Framework.Storage.SessionStorage = new WebSessionStorage();
in Global.asax, and the FormsApp will do
Framework.Storage.SessionStorage = new WebFormsSessionStorage();
I agree with Womp completely - inject the data down from your front-end into your lower tiers.
If you want to do a half-way cheat (but not too much of a cheat), you can create a very small assembly with just a couple of POCO classes to store all of the information you want to share across your tiers (currently logged-in username, time logged in, etc.) and just pass this object from your front-end into your biz/data tiers. If you do this, you MUST avoid the temptation to turn this POCO assembly into a general utility assembly - it MUST stay small or you WILL have problems in the future (trust me, learn the hard way, or ask somebody else to elaborate on this one). However, if you have this POCO assembly, injecting this data through the various tiers becomes very easy, and since it's POCO, it serializes very well and works nicely with web services, WCF, etc.
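For instance, the shared assembly might contain little more than this (all names hypothetical):

using System;

// Deliberately tiny, serializable, behavior-free: just the shared facts.
[Serializable]
public class UserContext
{
    public string UserName { get; set; }
    public DateTime LoginTimeUtc { get; set; }
}

// Lower tiers take it as an explicit argument instead of reading session state.
public interface IOrderService
{
    void PlaceOrder(UserContext context, int productId, int quantity);
}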
I've read some books on creating stateless websites and some about stateful client applications, but a lot of complexity comes in when you have to combine the two. We have a Flex application that needs to persist data to a database via .NET services. Things to keep in mind are:
- Concurrency (optimistic/pessimistic)
- Performance: Flex needs to load in lots of data so lazy-loading is often necessary.
- Do you use DTOs to transfer data between server and client?
I'll tell you the history of our product. We've used SubSonic from the beginning as an O/R mapper. SubSonic objects are converted to DTOs written by us, and these DTOs are transferred to the client. Client-side, the DTOs are converted to the domain model. If a domain model object needs to be saved client-side, it is converted back to a DTO and sent to the server. Server-side, the DTO is converted to a SubSonic object and saved to the database.
Now, some time ago, we needed the domain model on the .NET server side as well... so now we have three models on the server side: the SubSonic model, the DTO model, and the domain model. The DTO model is simpler and resembles the database more closely; the domain model has much more logic. It gets complex... We now have to keep the AS3 domain model code synchronized with the C# domain model code. If we could do it again (or got time to refactor), I think we wouldn't use the DTOs anymore, but would transfer the domain model between client and server. The question is whether this is realistic. DTOs are simple objects, so they are easy to transfer; domain model objects can be very complex.
Are there books on how to create an architecture for this kind of application, written by someone with a lot of experience? Do you have experience with this?
The reality is that sharing objects between the client and the server is quite complex. Here's what you need to make it happen:
The easy/non-scalable way:
Inherit all of your objects from MarshalByRefObject. If you create Object A on the server and send it to the client, any client modifications to the object will automatically be forwarded to the server.
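For illustration, the marshal-by-reference route is just inheritance (the Account class here is hypothetical):

using System;

// Deriving from MarshalByRefObject means remote clients get a transparent
// proxy: every member access below becomes a call back to the server.
public class Account : MarshalByRefObject
{
    public decimal Balance { get; private set; }

    // "Chunky": one round-trip does a complete unit of work.
    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
}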
While this sounds like the perfect solution, it has two major problems:
The client and server are tightly coupled with .NET (bye-bye Web Services)
It can be a performance nightmare. All method/property access will be forwarded to the server. If you choose this route, your objects should really be designed for chunky calls, not chatty ones.
The scalable/hard way:
Instead of using MarshalByRefObject, you would use DataContract/Serializable objects. However:
- If you create Object A on the server and send it to the client, the client will receive a copy of the object (let's call it Object B).
- When you send Object B back to the server, the server will receive a copy of Object B (let's call it Object C).
But you really want the server to treat Object A and Object C as the same. Unfortunately, the CLR cannot do this, so you'll need an Object Merger to sit on both the client and the server.
The Object Merger would contain a dictionary of all objects within the model, and know how to identify two instances as being the same, and merge any values from the receiving end. For instance, if the client already has Object C in memory, and receives an updated copy from the server, it would copy over the values.
Unfortunately, this is also fraught with problems, because you need to ensure that object references are preserved correctly. You can't just blindly update all properties on an object, because the object may have existing references to other objects, which in turn may require their own merging. On top of all this, you would also need to track added/removed objects contained in lists or dictionaries.
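To give a feel for it, here is a drastically simplified sketch of the identity-map idea behind such a merger (all names hypothetical); a real one must also walk object graphs, preserve references, and track list/dictionary membership, as noted above:

using System;
using System.Collections.Generic;

public interface IEntity
{
    Guid Id { get; }
}

public class ObjectMerger
{
    // Entities are identified by type + id so two physical copies can be matched.
    private readonly Dictionary<Tuple<Type, Guid>, IEntity> _identityMap =
        new Dictionary<Tuple<Type, Guid>, IEntity>();

    // Returns the canonical instance for the incoming copy, merging its values.
    public T Merge<T>(T incoming, Action<T, T> copyValues) where T : class, IEntity
    {
        var key = Tuple.Create(typeof(T), incoming.Id);

        IEntity existing;
        if (!_identityMap.TryGetValue(key, out existing))
        {
            _identityMap[key] = incoming; // first time we see this identity
            return incoming;
        }

        var canonical = (T)existing;
        copyValues(incoming, canonical); // caller decides which properties to copy
        return canonical;                // existing references stay valid
    }
}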
I am adding n-tier support to my own framework, so I'm going through the same exercise right now (I'm taking the "scalable/hard" route). Fortunately, I have a lot of the supporting infrastructure in place to assist with identification, merging, etc. If you're starting from scratch, it would be a significant piece of work.
P.S. Add lazy-loading proxies into the mix (I'm using NHibernate), and it gets even more interesting...
Go read anything by Fowler, particularly his design patterns material (especially the Assembler pattern and why you need what you are already doing).
Fowler's Patterns Of Enterprise Application Architecture