SimpleInjector equivalent to Unity's AsPerResolve LifetimeManager - unity-container

Unity has an AsPerResolve lifetime manager. Does SimpleInjector have anything similar? What is its equivalent?
Unity's definition of AsPerResolve is: Indicates that instances should be re-used within the same build up object graph

There is no exact equivalent of Unity's AsPerResolve (or per-object-graph, as it is commonly called). The reason there is no per-object-graph lifestyle in Simple Injector is that it is a very uncommon feature, and one that can easily cause problems.
In most cases the instance must be scoped per request, such as an HTTP request or WCF operation. With the per-object-graph lifestyle you can still get multiple instances per request, which can have unwanted side effects and is something that is easily caused accidentally. For instance, it's quite normal to postpone the creation of part of the object graph by using factories, or by injecting a Func<T> into a decorator, or something like that. Since the object graph is then cut into two (or more) parts, this results in extra per-object-graph instances in that request, and that is something that is actually quite hard to detect.
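As a hypothetical illustration of such a graph split (the handler types here are invented; this is merely a sketch of the factory/decorator situation just described):

using System;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// A decorator that postpones creation of the rest of the graph:
public class LazyCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly Func<ICommandHandler<TCommand>> handlerFactory;

    public LazyCommandHandlerDecorator(Func<ICommandHandler<TCommand>> handlerFactory)
    {
        this.handlerFactory = handlerFactory;
    }

    public void Handle(TCommand command)
    {
        // The inner handler is resolved here, later, as a second object graph.
        // A per-object-graph dependency shared by this decorator and the inner
        // handler would therefore be created twice within the same request.
        this.handlerFactory().Handle(command);
    }
}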
So the way to simulate the per-object-graph lifestyle with Simple Injector is with a scoped lifestyle, most probably the LifetimeScopeLifestyle.
This means you will have to wrap the call to GetInstance with a call to BeginLifetimeScope(), for instance:
using (container.BeginLifetimeScope())
{
    container.GetInstance<SomeRootObject>();
}
This effectively achieves the same result.
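For completeness, a minimal registration sketch, assuming the v2-era SimpleInjector.Extensions.LifetimeScoping package (later versions renamed these APIs to ThreadScopedLifestyle/AsyncScopedLifestyle); IUnitOfWork, UnitOfWork and SomeRootObject are placeholder types:

using SimpleInjector;
using SimpleInjector.Extensions.LifetimeScoping;

var container = new Container();

// Whatever must behave "per object graph" gets the scoped registration:
container.RegisterLifetimeScope<IUnitOfWork, UnitOfWork>();
container.Register<SomeRootObject>();

using (container.BeginLifetimeScope())
{
    // Every IUnitOfWork injected while resolving this graph is the same instance.
    var root = container.GetInstance<SomeRootObject>();
}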

I would think SimpleInjector's PerGraph lifestyle would be what you are looking for. Check out the documentation on it.

Related

How thread safe are private member variables across wcf tiers? [closed]

At work, our service calls follow the pattern of:
Create a proxy that allows you to hit a service on our business tier
Upon hitting the service, it creates a new response instance
instantiates a new instance of one of our business code classes
Assigns the result of calling whatever function on the new instance to the response
Returns the response back through the proxy
So it always looks like this:
Dim someRequest As Request = CreateSomeSortOfRequest()
Dim results As Response = Nothing
Using proxy As IResultProxy = OurLibrary.Proxy(Of IResultProxy).Create()
    results = proxy.GetResults(someRequest)
End Using
Then:
Dim results As Response = Nothing
Using whateverBusiness As New BusinessClass
    results = whateverBusiness.ComputeWhatever(request)
End Using
Return results
Pretty basic stuff, right? Well the guys who have worked there for a little over 20 years now will go on and on about how none of these business classes should ever have any member variables of any kind. Ever. Wanna perform some really complicated operation? Better be prepared to pass 10 to (and I've seen it) 30 parameters.
All of this, to me, seems like bad practice. As long as you remain in that narrow scope, you hand off a request to a new instance of a business class, ask it to perform whatever, it performs whatever logic is necessary within itself, returns the result, and you carry on with your day.
I've investigated, and we only ever use threading ourselves in one location in the system, and that just fires off different service calls (all of which follow the above pattern). We don't use instance pools, static variables, or anything else like that, especially since, as stated above, we have a running belief that there should never be any class-scoped variables.
Am I crazy for thinking that having these classes with extremely tight and locked down entry points (i.e. no outside access to internal variables) is perfectly fine, especially since there is no way to access the instances of the business class outside the scope of the service call? Or are my elders correct for stating that any private member variable in a class is non-threadsafe and should never be used?
I guess I should mention that the business classes pretty much always load some data from the database, try to piece that data together into, often, very deep hierarchical structures, then return (or the opposite: taking the object, breaking it apart, and performing, sometimes, hundreds of database calls to save).
Wanna perform some really complicated operation? Better be prepared to pass 10 to (and I've seen it) 30 parameters
Sounds like they don't want any state (public, anyway) on their business classes; an understandably noble vision, but one that rarely proves useful or practical as a general rule. Instead of 30 parameters, maybe they should pass in a struct or request class.
You could point out to them that, in their effort to prevent state, those 10-30 parameters come with their own set of problems.
As stated in the documentation for the brilliant code analysis tool nDepend:
NbParameters: The number of parameters of a method. Ref and Out are also counted. The this reference passed to instance methods in IL is not counted as a parameter.
Recommendations: Methods where NbParameters is higher than 5 might be painful to call and might degrade performance. You should prefer using additional properties/fields to the declaring type to handle numerous states. Another alternative is to provide a class or structure dedicated to handle arguments passing (for example see the class System.Diagnostics.ProcessStartInfo and the method System.Diagnostics.Process.Start(ProcessStartInfo)) - Holy swiss cheese Batman, tell me more.
It's arguably no different to when the client passes a request object to the WCF service. You are passing request objects, aren't you?
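To make that parameter-object suggestion concrete, here is a hypothetical before/after sketch (every type and member name is invented):

using System;

public class Response { }

// Instead of ComputeWhatever(customerId, from, to, includeArchived, ... x30),
// the former parameters become properties on a single request class:
public class ComputeWhateverRequest
{
    public int CustomerId { get; set; }
    public DateTime From { get; set; }
    public DateTime To { get; set; }
    public bool IncludeArchived { get; set; }
    // ...the remaining former parameters go here...
}

public class BusinessClass
{
    public Response ComputeWhatever(ComputeWhateverRequest request)
    {
        // Validate and use request.CustomerId, request.From, etc.
        return new Response();
    }
}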
OP:
Am I crazy for thinking that having these classes with extremely tight and locked down entry points (i.e. no outside access to internal variables) is perfectly fine
OK, it sounds like the system has been around for a while and has had some best practices applied by your elders during its construction. That's good. However, such a system is arguably only going to continue being robust as long as everyone follows whatever rules were set up... and from what you say, those rules sound quite bizarre and somewhat ill-informed.
It might also be an example of accidental architecture, where the system just is the way it is.
E.g. someone who goes and adds a public method, say some public properties, or makes what was a private field public, is likely to upset the applecart.
I once had the misfortune of working on a legacy system, and though it appeared to run without incident, it was all rather fragile due to the exorbitant number of public fields. (Mind you, this was C++!)
Someone could have said:
"well don't touch the public fields"
to which I could reply:
"well maybe we shouldn't make the fields public"
Hence their desire to have no instance fields. The notion that C# classes with "member variables of any kind" are naughty is not the real source of concern. Instead, I suspect the problem is that of thread safety, and for that they should be looking into how the caller or callers can be made thread-safe, not the business class in this case.
Enforcing thread safety by not having state, though effective, is kind of a sledgehammer approach and tends to annoy other parts of OO sub-systems.
WCF Threading Models
It sounds to me they are applying old-school threading protection in WCF, where WCF has its own way of guaranteeing thread safety, quite similar to how the Apartment model was successful for COM.
Instead of worrying about lock()s and synchronisation, why not let WCF serialise calls for you:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public partial class MyService : IMyService, IDisposable
{
    // ...
}
InstanceContextMode.PerSession essentially tells WCF to create a unique private instance of the service per client proxy. Got two clients calling? Well, that means two instances of MyService will be created. So irrespective of what instance members this class has, it's guaranteed not to tread on the other instance. (Note I don't refer to statics here.)
ConcurrencyMode = ConcurrencyMode.Single tells WCF that calls to this service instance must be serialised one after the other and that concurrent calls to the service are not allowed. This ties in with the InstanceContextMode setting.
Just by setting these two simple but very powerful settings in WCF, you have told it not only to create private instances of your WCF service so that multiple clients can't tread on each other, but also that even if a client shared its proxy across threads and attempted to call one particular service instance concurrently, WCF guarantees that calls to the service will be serialised safely.
What does this mean?
Feel free to add instance fields or properties to your service class
such members won't be trodden on by other threads
when using WCF, there is generally no need for explicit thread locking in your service class (depending on your app, this could apply to subsequent calls; see below)
It does not mean that per-session-single services only ever allow one client at a time. It means only one call per client proxy at a time. Your service will most likely have multiple instances running at any particular moment, having a jolly good time in the knowledge that one can't throw stones at the other.
Roll-on effects
As long as you remain in that narrow scope, hand off a request to a new instance of a business class
Since WCF has established a nice thread-safe ecosystem for you, it has a nice follow-on effect elsewhere in the call-stack.
With the knowledge that your service entry point is serialised, you are free to instantiate the business class and set public members if you really wanted to. It's not as if another thread can access it anyway.
Or are my elders correct for stating that any private member variable in a class is non-threadsafe
That depends entirely on how the class is used elsewhere. Just as a well designed business processing layer should not care whether the call stack came from WCF, a unit test, or a console app, there may be an argument for threading neutrality in the layer.
E.g. let's say the business class has some instance property. No drama: the business class isn't spawning threads. All the business class does is fetch some DB data, have a fiddle, and return it to the caller.
The caller is your WCF service. It was the WCF service that created an instance of the business class. But what's that I hear you say - "the WCF service instance is already thread-safe!" Exactly right, and thank you for paying attention. WCF has already set up a nice thread-safe environment as mentioned, so any instance member in the business class shouldn't get obliterated by another thread.
Our particular WCF thread is the only thread that is even aware of this particular business class's instance.
Conclusion
Many classes in .NET have state and many of those are in private fields. That doesn't mean it's bad design. It's how you use the class that requires thought.
A WinForms Font or Bitmap object has state (I suspect even in private members) and arguably shouldn't be fiddled with concurrently by multiple threads. That's not a demonstration of poor design on Microsoft's part; rather, it is something that should have state.
That's two classes created by people much smarter than you, me, and (I suspect) your elders, in a codebase larger than anything we will ever work on.
I think it is fantastic that you are questioning your elders. Sometimes we don't always get it right.
Keep it up!
See Also
Lowy, Juval, "Programming WCF Services: Mastering WCF and the Azure AppFabric Service Bus", Amazon. The WCF bible - a must-read prior to any serious dabbling in WCF goodness.
nDepend, a truly marvelous and powerful code analysis tool. One may be forgiven for thinking it's an FxCop-type tool, and though it does support such features, it does that and more. It analyses your entire Visual Studio solution (and stand-alone libraries if you wish), investigating coupling for one, and excessive use of parameters for another. Be prepared for it to point out some embarrassing mistakes made by the best of us.
Comes with some groovy charts too that look impressive on any dashboard screen.

API design: is "fault tolerance" a good thing?

I've consolidated many of the useful answers and come up with my own answer below.
For example, I am writing an API Foo which needs explicit initialization and termination. (This should be language agnostic, but I'm using C++ here.)
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime);
    static void TermLibrary(int someOtherInput);
};
Apparently, our library doesn't care about multi-threading, reentrancy or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert on some static variable, which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my lib has already been initialized.
Method #1 obviously is explicit, while method #2 makes it more user friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware of the fact that InitLibrary shouldn't be called twice.
What would be the pros/cons of each approach? Is there a cleverer way to subvert all these?
Edit
I know that the example here is very contrived. As #daemon pointed out, I should initialize myself and not bother the caller. Practically, however, there are places where I need more information to properly initialize myself (note the variable name someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination; the same dilemma exists in other situations where I must choose between being, quote-unquote, "fault tolerant" and failing loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
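For instance, a minimal C# sketch of approach #1 (the names mirror the question's example; the exception type and message are up to you):

using System;

public static class Foo
{
    private static bool initialized;

    public static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (initialized)
            throw new InvalidOperationException(
                "Foo is already initialized; InitLibrary must be called exactly once.");

        // ...real initialization using someMagicInputRequiredAtRuntime...
        initialized = true;
    }
}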
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple create-calls to the factory would create different objects. This way, the initialize method would not be part of the public interface of the library, and the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
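A rough sketch of that factory idea, with FooLibrary as a hypothetical wrapper class:

public sealed class FooLibrary
{
    private static FooLibrary instance; // only needed for the singleton variant

    private FooLibrary(int someMagicInputRequiredAtRuntime)
    {
        // ...initialization happens exactly once, here...
    }

    public static FooLibrary Create(int someMagicInputRequiredAtRuntime)
    {
        // For the "only one active instance" case, hand back the existing
        // instance (or throw) instead of constructing a second one:
        if (instance == null)
            instance = new FooLibrary(someMagicInputRequiredAtRuntime);
        return instance;
    }
}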
I would suggest that you should flag an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after the second call would be the same as if it had been called just once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
Have a private static counter variable in your class. If it is 0 then do the logic in Init and increment the counter, If it is more than 0 then simply increment the counter. In Term do the opposite, decrement until it is 0 then do the logic.
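A minimal sketch of that counter approach (in C# for brevity; the C++ version is analogous):

public static class Foo
{
    private static int initCount;

    public static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (initCount == 0)
        {
            // ...do the real one-time initialization here...
        }
        initCount++;
    }

    public static void TermLibrary(int someOtherInput)
    {
        initCount--;
        if (initCount == 0)
        {
            // ...do the real teardown here...
        }
    }
}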
Another way is to use a Singleton pattern, here is a sample in C++.
I guess one way to subvert this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work," but the more careful programmer can turn on these strict settings themselves.
One example against the "always complain at the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers barked at any CSS hacks (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down, my API should ideally "just work." Of course, for anyone to be involved in any domain, he needs to work at one or two levels of abstraction lower than the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API for long enough, he will begin to stretch the limits, and too much effort to "hide" or "encapsulate" the inner workings will only become a nuisance.
I guess fault tolerance is most of the time a good thing; it's just difficult to get right when the API user is stretching corner cases. I could say the best of both worlds is to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may be just talking ideals here. Practically it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on the use of your library, this means the error won't surface until much later in development, possibly only when shipped (again, it depends a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If however someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or presumably the library will not function as expected ("I init'ed the lib with value 42, but it's behaving as if I initted with 11!?").
If this library is a static class (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiatable type, then put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concepts you used while designing the API. Programmers want solutions for their actual problems, not lessons in how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical stuff as possible. That implies that the API should be fault tolerant.

Dispose & Finalize for collections of properties?

I'm looking at some vb.net code I just inherited, and cannot fathom why the original developer would do this.
Basically, each "Domain" class is a collection of properties. And each one implements IDisposable.Dispose, and overrides Finalize(). There is no base class, so each just extents Object.
Dispose sets each private var to Nothing, or calls _private.Dispose when the property is another domain object. There's a private var that tracks the disposed state, and the final thing in Dispose is GC.SuppressFinalize(Me).
Finalize just calls Me.Dispose and MyBase.Finalize.
Is there any benefit to this? Any harm? There are no un-managed resources, no db connections, nothing that would seem to need this.
That strikes me as being a VB6 pattern.
I would bet the guy was coming straight from VB6, maybe in the earlier days of .NET when these patterns were not widely understood.
There also is one case where setting an internal reference to Nothing is useful in a call to Dispose: when the member is marked as WithEvents.
Without that, you risk having an uncollected object handling events when it really should not be doing that anymore.
It would seem to me that this is something that is NOT needed at all, especially without un-managed resources and data connections.
If you happen to be able to sanitize and post the code we might be able to get a bit more insight, but realistically I can't see a need for it.
Depending on the size of the objects, and how often they are created/destroyed, it could be to ensure GC can happen as early as possible.
It may be that this pattern was used in other projects and it continues on without anyone understanding why it was used in the first place. Monkey Gardeners
The only reason that I could see for this -- and this is dubious at best -- is if these things are being created and disposed of higher in the "food chain" and there is a potential for some of these domain classes to have either a limited or unmanaged resource at some point.
Even that is sketchy...it sounds like someone came from an unmanaged background and was looking for the .NET equivalent to managing your memory and came across the IDisposable interface.

Would you consider this a singleton/singleton pattern?

Imagine in the Global.asax.cs file I had an instance class as a private field. Let's say like this:
private MyClass _myClass = new MyClass();
And I had a static method on Global called GetMyClass() that gets the current HttpApplication and returns that instance.
public static MyClass GetMyClass()
{
    return ((Global)HttpContext.Current.ApplicationInstance)._myClass;
}
So I could get the instance on the current request's HttpApplication by calling Global.GetMyClass().
Keep in mind that there is more than one (Global) HttpApplication. There is an HttpApplication for each request and they are pooled/shared, so in the truest sense it is not a real singleton. But it does follow the pattern to a degree.
So as the question asked, would you consider this at the very least the singleton pattern?
Would you say it should not be used? Would you discourage its use? Would you say it's possibly bad practice, like a true singleton?
Could you see any problems that may arise from this type of usage scenario?
Or would you say it's not a true singleton, so it's OK and not bad practice? Would you recommend this as a semi-quasi singleton where an instance per request is required? If not, what other pattern/suggestion would you use/give?
Have you ever used anything such as this?
I have used this on past projects, but I am unsure if it's a practice I should stay away from. I have never had any issues in the past though.
Please give me your thoughts and opinions on this.
I am not asking what a singleton is. And I consider a singleton bad practice when used improperly, which is in many, many, many cases. That is me. However, that is not what I am trying to discuss. I am trying to discuss THIS scenario I gave.
Whether or not this fits the cookie-cutter pattern of a Singleton, it still suffers from the same problems as Singleton:
It is a static, concrete reference and cannot be substituted for separate behavior or stubbed/mocked during a test
You cannot subclass this and preserve this behavior, so it's quite easy to circumvent the singleton nature of this example
I'm not a .NET person so I'll refrain from commenting on this, except for this part:
Would you say it's bad practice, like a true singleton?
True singletons aren't 'bad practice'. They're HORRIBLY OVERUSED, but that's not the same thing. I read something recently (can't remember where, alas) where someone pointed out the distinction between 'want or need' and 'can'.
"We only want one of these", or "we'll only need one": not a singleton.
"We CAN only have one of these": singleton
That is, if the very idea of having two of that object will break something in horrible and subtle ways, yes, use a singleton. This is true a lot more rarely than people think, hence the proliferation of singletons.
A Singleton is an object of which there CAN BE only one.
Objects of which there just happens to be one right now are not singletons.
Since you're talking about a web application, you need to be very careful with assuming anything with static classes or this type of pseudo-singleton, because as David B said, they are only shared across that thread. Where you will get in trouble is if IIS is configured to use more than one worker process (configured with the ill-named "Web-Garden" mode, but also the number of worker processes can be set in machine.config). Assuming the box has more than one processor, whoever is trying to tweak its performance is bound to turn this on.
A better pattern for this sort of thing is to use the HttpCache object. It is already thread-safe by nature, but what still catches most people is that your object also needs to be thread-safe (since you're probably only going to create the instance once and then read/write to a lot of its properties over time). Here's some skeleton code to give you an idea of what I'm talking about:
public SomeClassType SomeProperty
{
    get
    {
        if (HttpContext.Current.Cache["SomeKey"] == null)
        {
            HttpContext.Current.Cache.Add("SomeKey", new SomeClassType(), null,
                System.Web.Caching.Cache.NoAbsoluteExpiration, TimeSpan.FromDays(1),
                CacheItemPriority.NotRemovable, null);
        }
        return (SomeClassType)HttpContext.Current.Cache["SomeKey"];
    }
}
Now if you think you might need a web farm (multi-server) scale path, then the above won't work as the application cache isn't shared across machines.
Forget singleton for a moment.
You have static methods that return application state. You better watch out.
If two threads access this shared state... boom. If you live on the webserver, your code will eventually be run in a multi-threaded context.
I would say that it is definitely NOT a singleton. Design patterns are most useful as definitions of common coding practices. When you talk about singletons, you are talking about an object where there is only one instance.
As you yourself have noted, there are multiple HttpApplications, so your code does not follow the design of a Singleton and does not have the same side-effects.
For example, one might use a singleton to update currency exchange rates. If this person unknowingly used your example, they would fire up seven instances to do the job that 'only one object' was meant to do.

What are the downsides to static methods?

What are the downsides to using static methods in a web site business layer versus instantiating a class and then calling a method on the class? What are the performance hits either way?
The performance differences will be negligible.
The downside of using a static method is that it becomes less testable. When dependencies are expressed in static method calls, you can't replace those dependencies with mocks/stubs. If all dependencies are expressed as interfaces, where the implementation is passed into the component, then you can use a mock/stub version of the component for unit tests, and then the real implementation (possibly hooked up with an IoC container) for the real deployment.
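As a hypothetical illustration (OrderService, TaxCalculator and friends are invented names), compare a hard-wired static dependency with one expressed as an interface:

public class Order
{
    public decimal Subtotal { get; set; }
}

// Static dependency: a unit test cannot substitute TaxCalculator.
public static class TaxCalculator
{
    public static decimal Calculate(Order order) => order.Subtotal * 0.2m;
}

public class OrderService
{
    public decimal Total(Order order) =>
        order.Subtotal + TaxCalculator.Calculate(order); // hard-wired
}

// Interface-based dependency: a test can pass in a fake ITaxCalculator.
public interface ITaxCalculator
{
    decimal Calculate(Order order);
}

public class TestableOrderService
{
    private readonly ITaxCalculator tax;

    public TestableOrderService(ITaxCalculator tax) { this.tax = tax; }

    public decimal Total(Order order) => order.Subtotal + tax.Calculate(order);
}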
Jon Skeet is right--the performance difference would be insignificant...
Having said that, if you are building an enterprise application, I would suggest using the traditional tiered approach espoused by Microsoft and a number of other software companies. Let me briefly explain:
I'm going to use ASP.NET because I'm most familiar with it, but this should easily translate into any other technology you may be using.
The presentation layer of your application would be comprised of ASP.NET aspx pages for display and ASP.NET code-behinds for "process control." This is a fancy way of talking about what happens when I click submit. Do I go to another page? Is there validation? Do I need to save information to the database? Where do I go after that?
The process control is the liaison between the presentation layer and the business layer. This layer is broken up into two pieces (and this is where your question comes in). The most flexible way of building this layer is to have a set of business logic classes (e.g., PaymentProcessing, CustomerManagement, etc.) that have methods like ProcessPayment, DeleteCustomer, CreateAccount, etc. These would be static methods.
When the above methods get called from the process control layer, they would handle all the instantiation of business objects (e.g., Customer, Invoice, Payment, etc.) and apply the appropriate business rules.
Your business objects are what would handle all the database interaction with your data layer. That is, they know how to save the data they contain...this is similar to the MVC pattern.
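To make the layering concrete, here's a hypothetical sketch (every name is invented):

public class PaymentRequest
{
    public int CustomerId { get; set; }
    public decimal Amount { get; set; }
}

// Business object: knows how to persist itself to the data layer.
public class Payment
{
    private readonly decimal amount;

    public Payment(decimal amount) { this.amount = amount; }

    public void Save()
    {
        // ...database interaction lives here...
    }
}

// Business logic class: static methods called from the process control layer.
public static class PaymentProcessing
{
    public static void ProcessPayment(PaymentRequest request)
    {
        // The static method instantiates the business objects
        // and applies the appropriate business rules:
        var payment = new Payment(request.Amount);
        payment.Save();
    }
}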
So--what's the benefit of this? Well, you still get testability at multiple levels. You can test your UI, you can test the business process (by calling the business logic classes with the appropriate data), and you can test the business objects (by manually instantiating them and testing their methods). You also know that if your data model or objects change, your UI won't be impacted, and only your business logic classes will have to change. Also, if the business logic changes, you can change those classes without impacting the objects.
Hope this helps a bit.
Performance-wise, using static methods avoids the overhead of object creation/destruction. This is usually insignificant.
They should be used only where the action the method takes is not related to state; for instance, for factory methods. It'd make no sense to create an object instance just to instantiate another object instance :-)
String.Format(), and the TryParse() and Parse() methods, are all good examples of when a static method makes sense. They always do the same thing, do not need state, and are fairly common, so instancing makes less sense.
On the other hand, using them when it does not make sense (for example, having to pass all the state into the method, say, with 10 arguments) makes everything more complicated, less maintainable, less readable and less testable, as Jon says. I think it's not relevant whether this is the business layer or anywhere else in the code; only use them sparingly and when the situation justifies them.
If the method uses static data, this will actually be shared amongst all users of your web application.
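A tiny hypothetical illustration of that sharing:

public static class HitCounter
{
    // One value for every user of the web application, not one per request:
    private static int hits;

    // Also not thread-safe: ++ is a read-modify-write, so concurrent requests
    // can lose updates unless you lock or use Interlocked.Increment.
    public static int Increment() => ++hits;
}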
Code-only, no real problems beyond the usual issues with static methods in all systems.
Testability: static dependencies are less testable
Threading: you can have concurrency problems
Design: static variables are like global variables
