Will placing constraints on arbitrary method return types violate Bean Validation 1.0?

I'm aware that Bean Validation 1.1 introduces support for validating arbitrary methods.
But will adding, say, a @NotNull constraint to an arbitrary method (such as @NotNull public Frobnicator frobnicate()) cause problems in Bean Validation 1.0? Or should such metadata—I hope!—simply be ignored by a Bean Validation 1.0 validator?
(I can test this of course using Hibernate Validator, but that only tells me that this particular implementation either honors this state of affairs or does not; it does not tell me if it was the specification authors' intent to allow the placing of validation constraints on arbitrary methods in Bean Validation 1.0.)

Hibernate Validator 4.x (the reference implementation of BV 1.0) offers its own API for method validation, so method constraints could potentially be validated there.
But solely adding a constraint to a method doesn't cause its validation; instead you need some sort of method interceptor, AOP advice, etc. to invoke the validation engine upon method invocation. So I don't think you would see any unexpected side effects in BV 1.0.
I'd assume it's the same for Apache BVal, which also has a feature for method validation.
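This point is easy to demonstrate: a runtime annotation on a method is inert metadata until some engine chooses to read it. Here is a minimal sketch using a stand-in NotNull annotation (an assumption, so the example compiles without a Bean Validation jar; the real javax.validation.constraints.NotNull is also just runtime metadata at this level):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationIsInert {

    // Stand-in for javax.validation.constraints.NotNull (hypothetical,
    // defined locally so no Bean Validation jar is needed).
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface NotNull {}

    static class Frobnicator {}

    static class Service {
        @NotNull
        public Frobnicator frobnicate() {
            return null; // no interceptor is watching, so no exception
        }
    }

    // An engine (method interceptor, AOP advice, ...) could still find
    // the metadata via reflection:
    static boolean frobnicateIsAnnotated() {
        try {
            Method m = Service.class.getMethod("frobnicate");
            return m.isAnnotationPresent(NotNull.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(new Service().frobnicate()); // null, no error
        System.out.println(frobnicateIsAnnotated());    // true
    }
}
```

The call returns null without complaint, even though the annotation is plainly visible to anything that cares to look for it; that is exactly why adding constraints alone has no side effects.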

In my honest opinion, since the behavior isn't specified in Bean Validation 1.0, there is no way to guarantee that all implementations will handle it the same way.

Sling Models vs WCMUsePojo

I need an understanding of what WCMUsePojo and Sling Models mean.
I've read that these implementations are meant to bring together your component and the back-end implementation, but what exactly is done in these (WCMUsePojo and Sling Models) vs what is done inside a component's Sightly code?
Also, what is the difference between using WCMUsePojo and using Sling Models?
Sling's implementation of HTL/Sightly has several ways of resolving business-logic classes to be used in HTL scripts, among these:
Regular POJOs
Sling Models
Regular POJOs that extend WCMUsePojo (Javadoc) implement the Use interface and are initialised with the scripting bindings, providing convenience methods for accessing commonly used objects (request, resource, properties, page, etc.).
Sling Models can also be used outside HTL/Sightly, thus making your business logic more reusable. They are managed by Sling and can have references to other objects injected via annotations and reflection.
You can find more information, as well as pros and cons for each, at https://sling.apache.org/documentation/bundles/scripting/scripting-htl.html#picking-the-best-use-provider-for-a-project

Why we extend action class in struts 1.3

I'm currently learning Struts 1.3!
Why do we extend the Action class in Struts 1.3?
public class LoginAction extends Action
And when I look into the web.xml file, the mapping says
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
What is the significance of the ActionServlet class, and why is ActionServlet mapped to Action?
Thanks
Simple answers:
Because that's how Struts 1 works.
Because you have to map requests to actions somehow.
Longer answers:
1. In S1 everything depends on subclassing framework classes. (See notes.) The base Action class really only provides a small portion of framework functionality (messaging and resources, primarily), along with a small collection of other non-POJO framework classes, notably ActionForm.
In Struts (and in lots of older software) programs were written to implementations, not interfaces. Action is a class, not an interface. Most of the S1 framework's implementation references Action, meaning that in order to fulfill framework method signatures and return values, you must subclass Action.
2. One goal of MVC is to route requests to appropriate handlers. In S1 this is handled by the ActionServlet. It looks at the request and, based on the Struts configuration, determines which action will handle the request. It acts (more or less) as the controller.
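The front-controller idea behind ActionServlet can be sketched in a few lines of plain Java. This is a toy, not Struts code; the names (Action, execute, LoginAction, addMapping) deliberately echo Struts and struts-config.xml but are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class FrontControllerSketch {

    // The base class everything must extend. S1 codes to this class,
    // not to an interface, which is why subclassing is mandatory.
    static abstract class Action {
        abstract String execute(String request);
    }

    static class LoginAction extends Action {
        String execute(String request) {
            return "login handled: " + request;
        }
    }

    // Plays the role of ActionServlet plus the struts-config.xml mappings:
    // one servlet receives every request and routes it to the right Action.
    static class ActionServlet {
        private final Map<String, Action> mappings = new HashMap<>();

        void addMapping(String path, Action action) {
            mappings.put(path, action);
        }

        String service(String path, String request) {
            Action action = mappings.get(path); // "which action handles this?"
            return action.execute(request);
        }
    }

    public static void main(String[] args) {
        ActionServlet servlet = new ActionServlet();
        servlet.addMapping("/login.do", new LoginAction());
        System.out.println(servlet.service("/login.do", "user=bob"));
    }
}
```

One class receives every request (that's the web.xml mapping) and dispatches by configuration to an Action subclass (that's why your LoginAction extends Action).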
For further study: A significant portion of functionality, and a main point of extension, lies further beneath the surface, in the RequestProcessor class. Note that instantiating actions to handle requests requires an Action to be returned. This means that requests must be handled by an Action, although any Action subclass may be used, e.g., your own actions, a framework action like DispatchAction or ForwardAction, etc.
I would add that your questions can be answered by reading the wealth of information available about Struts 1 on its site and on the internet. Since the source is available it's also important to check and verify assumptions against it. I might recommend looking at Struts 1.2 source instead of 1.3 since there's a small layer of functionality added that clouds the basics.
All this said, unless you have a very compelling reason to do so, studying Struts 1 is a waste of time. It's been EOL'd, hasn't been recommended for new projects for years, is written in a pretty old style, etc. Modern Java web frameworks are much, much easier to work with. Struts 1 was written a Pretty Long Time Ago, before inheritance had fallen out of favor, before annotations existed, before marker interfaces were used all over the place, etc.

SimpleInjector equivalent to Unity's AsPerResolve lifetimemanager

Unity has an AsPerResolve lifetime manager. Does SimpleInjector have anything similar? What is its equivalent?
Unity's definition of AsPerResolve is: Indicates that instances should be re-used within the same build up object graph
There is no exact equivalent of Unity's AsPerResolve, or per-object-graph as it is commonly called. The reason there is no per-object-graph lifestyle in Simple Injector is that it is a very uncommon feature, which can easily cause problems.
In most cases the instance must be scoped per request, such as an HTTP request or WCF operation. With the per-object-graph lifestyle, you can still have multiple instances per request, which can have unwanted side effects and is something that is easily caused accidentally. For instance, it's quite normal to postpone the creation of part of the object graph by using factories, injecting a Func<T> into a decorator, or something like that. Since the object graph is cut into two (or more) parts, this will result in extra per-object-graph instances in that request, something that is actually quite hard to detect.
So the way to simulate the per-object-graph lifestyle with Simple Injector is with a scoped lifestyle, most probably the LifetimeScopeLifestyle.
This means you will have to wrap the call to GetInstance with a call to BeginLifetimeScope(), for instance:
using (container.BeginLifetimeScope())
{
    container.GetInstance<SomeRootObject>();
}
This achieves effectively the same result.
I would think SimpleInjector's PerGraph lifestyle would be what you are looking for. Check out the documentation on it.

Is reflection actually useful apart from reverse engineering?

Languages such as Java and PHP support reflection, which allows objects to provide metadata about themselves. Are there any legitimate use cases where you would need to be able to do something like ask an object what methods it has outside of the realm of reverse engineering? Are any of those use cases actually implemented today?
Reflection is used extensively in Java by frameworks which are leveraged at runtime to operate with other code dynamically. Without reflection, all links between code must be done at compile time (statically).
So, for example, any useful plug-in framework (OSGi, JSPF, JPF) leverages reflection. Any injection framework (Spring, Guice, etc.) leverages reflection.
Any time you want to write a piece of code that will interact with another piece of code without having that piece of code available when compiling, Reflection is the way forward in Java.
However, this is best left to frameworks and should be encapsulated.
There certainly are good use cases. For example, obtaining developer-provided metadata. Java APIs are increasingly using annotations to provide info about methods/fields/classes and their use. Like input validation, binding to data representations... You could use these at compile-time to generate metadata descriptors and use those, but to do it at runtime would require reflection. Even if you used the metadata descriptors, they'd end up containing things like class, method and field names that'd need to be accessed via reflection.
Another use case: dynamic languages. Take Ruby... It allows you to check up-front whether an object would respond to a method name before trying to call that method. Something like that requires reflection.
Or how about when a class or method name must be provided from outside compiled code, like when selecting an implementation of some API. That's just gonna be a bit of text. Looking up what it resolves to comes down to reflection.
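To make the last two points concrete, here is a minimal Java sketch: the implementation class is looked up from a plain string (which in practice would come from a config file), its methods are enumerated, and one is invoked by name. The names GreetingImpl and greet are invented for the example:

```java
import java.lang.reflect.Method;

public class ReflectionLookup {

    public static class GreetingImpl {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Resolve a class from a string and call a method on it, without the
    // calling code ever referencing the implementation type at compile time.
    static Object callByName(String className, String methodName, String arg) {
        try {
            Class<?> clazz = Class.forName(className);
            Object instance = clazz.getDeclaredConstructor().newInstance();
            Method m = clazz.getMethod(methodName, String.class);
            return m.invoke(instance, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // In a real framework this string would come from configuration:
        String className = GreetingImpl.class.getName();
        System.out.println(callByName(className, "greet", "world")); // Hello, world
    }
}
```

This is exactly the "bit of text resolving to code" scenario: remove reflection and there is no way to bridge from the configuration string to the class.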
Frameworks like Spring or Hibernate make extensive use of reflection to inspect a class and see the annotations.
Frameworks for debugging, serialization, logging, testing...

Do validators duplicate business logic?

I know it's possible to use validators to check data input in the presentation layer of an application (e.g. regex, required fields etc.), and to show a message and/or required marker icon. Data validation generally belongs in the business layer. How do I avoid having to maintain two sets of validations on data I am collecting?
EDIT: I know that presentation validation is good, that it informs the user, and that it's not infallible. The fact remains, does it not, that I am effectively checking the same thing in two places?
Yes, and no.
It depends on the architecture of your application. We'll assume that you're building an n-tier application, since the vast majority of applications these days tend to follow that model.
Validation in the user interface is designed to provide immediate feedback to the end-user of the system to prevent functionality in the lower tiers from ever executing in the first place with invalid inputs. For example, you wouldn't even want to try to contact the Active Directory server without both a user name and a password to attempt authentication. Validation at this point saves you processing time involved in instantiating an object, setting it up, and making an unnecessary round trip to the server to learn something that you could easily tell through simple data inspection.
Validation in your class libraries is another story. Here, you're validating business rules. While it can be argued that validation in the user interface and validation in the class libraries are the same, I would tend to disagree. Business rule validation tends to be far more complex. Your rules in this case may be more nuanced, and may detect things that cannot be gleaned through the user interface. For example, you may enforce a rule that states that the user may execute a method only after all of a class's properties have been properly initialized, and only if the user is a member of a specific user group. Or, you may specify that an object may be modified only if it has not been modified within the last twenty-four hours. Or, you may simply specify that a string value cannot be null or empty.
In my mind, however, properly designed software uses a common validation mechanism, invoked (where possible) from both the UI and the class library, to preserve DRY. In most cases, this is possible. (In many cases, the code is so trivial that it's not worth it.)
I don't think client-side (presentation layer) validation is actual, useful validation; rather, it simply notifies the user of any errors the server-side (business layer) validation will find. I think of it as a user interface component rather than an actual validation utility, and as such, I don't think having both violates DRY.
EDIT: Yes, you are doing the same action, but for entirely different reasons. If your only goal is strict adherence to DRY, then you do not want to do both. However, by doing both, while you may be performing the same action, the results of that action are used for different purposes (actually validating the information vs. notifying the user of a problem), and therefore performing the same action twice actually results in useful information each time.
I think having good validations at the application layer provides multiple benefits:
1. It facilitates unit testing.
2. You can add multiple clients without worrying about data consistency.
UI validation can be used as a tool to provide quick response times to end users.
Each validation layer serves a different purpose. The user interface validation is used to discard the bad input. The business logic validation is used to perform the validation based on business rules.
For UI validation you can use RequiredFieldValidators and the other validators available in the ASP.NET framework. For business validation you can create a validation engine that validates the object. This can be accomplished using custom attributes.
Here is an article which explains how to create a validation framework using custom attributes:
http://highoncoding.com/Articles/424_Creating_a_Domain_Object_Validation_Framework.aspx
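The custom-attribute approach in the linked article is written for .NET, but the same idea translates directly to Java annotations. Below is a deliberately tiny sketch of such a validation engine; the names (Required, Customer, validate) are invented for illustration, and a real engine would support more rule types and messages:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class TinyValidator {

    // The "custom attribute": marks a field as mandatory.
    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Required {}

    static class Customer {
        @Required String name;
        String nickname; // not annotated, so never checked
    }

    // Walk the fields and apply the rule attached to each annotation.
    // Both the business layer and the UI can call this same engine,
    // so the rules themselves are defined exactly once.
    static List<String> validate(Object target) {
        List<String> errors = new ArrayList<>();
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Required.class)) {
                f.setAccessible(true);
                try {
                    Object value = f.get(target);
                    if (value == null || value.toString().isEmpty()) {
                        errors.add(f.getName() + " is required");
                    }
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        System.out.println(validate(c)); // [name is required]
        c.name = "Ada";
        System.out.println(validate(c)); // []
    }
}
```

Because the rules live on the model as metadata, the UI layer can read the same annotations to render required-field markers, which is one way to keep the two validation layers from drifting apart.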
Following up on a comment from Fredrik Mörk as an answer, because I don't think the other answers are quite right, and it's important for the question.
At least in a context where the presentation validation can be bypassed, the presentation validations and business validations are doing completely different things.
The business validations protect the application. The presentation validations protect the time of the user, and that's all. They're just another tool to assist the user in producing valid inputs, assuming that the user is acting in good faith. Presentation validations should not be used to protect the business validations from having to do extra work because they can't be relied upon, so you're really just wasting effort if you try to do that.
Because of this, your business validations and presentation validations can look extremely different. For business validations, depending on the complexity of your application / scope of what you're validating at any given time, it may well be reasonable to expect them to cover all cases, and guarantee that invalid input is impossible.
But presentation validations are a moving target, because user experience is a moving target. You can almost always improve user experience beyond what you already have, so it's a question of diminishing returns and how much effort you want to invest.
So in answer to your question, if you want good presentation validation, you may well end up duplicating certain aspects of business logic - and you may well end up doing more than that. But you are not doing the same thing twice. You've done two things - protected your application from bad-faith actors, and provided assistance to good-faith actors to use your system more easily. In contexts where the presentation layer cannot be relied upon, there is no way to reduce this down so that you only perform a task like "only a number please" once.
It's a matter of perspective. You can think of this as "I'm checking that the input is a number twice", or you can think "I've guaranteed that I'm never handed anything but a number, and I've made sure the user realises as early as possible that they're only supposed to enter a number". That's two things, not one, and I'd strongly recommend that mental approach. It'll help keep the purpose of your two different kinds of validations in mind, which should make your validations better.
