I have the following problem with my EMF based Eclipse application:
Undo works fine. Validation works fine. But when there is a validation error for the data in a GUI field, the error blocks the undo action: it is not possible, for example, to undo back to the last valid state for that field.
Tools that are used in the application:
Eclipse data binding
UpdateValueStrategy instances on the bindings for validation
Undo is implemented using the standard UndoAction, which calls CommandStack.undo
A MessageManagerSupport class that connects the validation framework to the Eclipse Forms based GUI.
The data bindings look like this:
dataBindingContext.bindValue(WidgetProperties.text(...),
        EMFEditProperties.value(...), validatingUpdateStrategy, null);
The problem is this:
The undo system works on the commands that change the model.
The validation system stops updates from reaching the model when there are validation errors.
To make undo work when there are validation errors, I think I could do one of these things:
Make the undo system work on the GUI layer. (This would be a huge change; it's probably not possible to use EMF for this at all.)
Make invalid data in the GUI trigger commands that change the model data, the same way valid data does. (This would be okay as long as the data cannot be saved to disk, but I can't find a way to do it.)
Make the validation work directly on the model, maybe triggered by a content listener on the Resource. (This is a big change of validation strategy, and it doesn't seem possible to track the source GUI control at that stage.)
These solutions either seem impossible or have severe disadvantages.
What is the best way to make undo work even when there are validation errors?
NOTE: I accepted Mad Matt's answer because their suggestions led me to my solution. But I'm not really satisfied with that, and I wish there were a better one.
If someone someday finds a better solution, I'd be happy to consider accepting it instead of the current one!
It makes sense that the Validator protects your Target value from invalid values.
Therefore the target command stack remains untouched in case of an invalid value.
Why would you want to force invalid values to be set? Isn't Ctrl+Z in the GUI enough to restore the last valid state?
If you still want to set these values to your actual Target model, you can play around with the UpdateValueStrategy.
The update phases are:
Validate after get - validateAfterGet(Object)
Conversion - convert(Object)
Validate after conversion - validateAfterConvert(Object)
Validate before set - validateBeforeSet(Object)
Value set - doSet(IObservableValue, Object)
I'm not sure where exactly the validation error (IStatus.ERROR) occurs, but you could check where it happens and then force a SetCommand manually.
You can set a custom IValidator for each step on your UpdateValueStrategy to do that.
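For illustration, a minimal sketch of that idea, assuming an editingDomain, modelObject, feature, and myDomainValidator from your own application (all names here are placeholders): a custom after-convert validator that, on error, still pushes the value into the model with an explicit SetCommand so that an undoable command lands on the stack.

UpdateValueStrategy strategy = new UpdateValueStrategy();
strategy.setAfterConvertValidator(new IValidator() {
    @Override
    public IStatus validate(Object value) {
        IStatus status = myDomainValidator.validate(value); // placeholder validator
        if (status.getSeverity() >= IStatus.ERROR) {
            // Force the invalid value into the model anyway so that
            // an undoable command is recorded on the command stack.
            editingDomain.getCommandStack().execute(
                    SetCommand.create(editingDomain, modelObject, feature, value));
        }
        return status;
    }
});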
NOTE: This is the solution I ended up using in my application. I'm not really satisfied with it; I think it is a bit of a hack.
I accepted Mad Matt's answer because their suggestions led me to this solution.
If someone someday finds a better solution, I'd be happy to consider accepting it instead of the current one!
I ended up creating an UpdateValueStrategy subclass which runs a validator after a value has been set on the model object. This seems to be working fine.
I created this answer to post the code I ended up using. Here it is:
/**
 * An {@link UpdateValueStrategy} that can perform validation AFTER a value is set
 * in the model. This is used because undo doesn't work if no model change is made.
 */
public class LateValidationUpdateValueStrategy extends UpdateValueStrategy {

    private IValidator afterSetValidator;

    public void setAfterSetValidator(IValidator afterSetValidator) {
        this.afterSetValidator = afterSetValidator;
    }

    @Override
    protected IStatus doSet(IObservableValue observableValue, Object value) {
        IStatus setStatus = super.doSet(observableValue, value);
        if (setStatus.getSeverity() >= IStatus.ERROR || afterSetValidator == null) {
            return setStatus;
        }

        // I used a validator here that calls the EMF generated model validator.
        // That way I can specify the validation in the model.
        IStatus validStatus = afterSetValidator.validate(value);

        // Merge the two statuses
        if (setStatus.isOK() && validStatus.isOK()) {
            return validStatus;
        } else if (!setStatus.isOK() && validStatus.isOK()) {
            return setStatus;
        } else if (setStatus.isOK() && !validStatus.isOK()) {
            return validStatus;
        } else {
            return new MultiStatus(Activator.PLUGIN_ID, -1,
                    new IStatus[] { setStatus, validStatus },
                    setStatus.getMessage() + "; " + validStatus.getMessage(), null);
        }
    }
}
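For completeness, this is roughly how the strategy is wired into a binding. The observables and the validateWithEmfValidator helper are placeholders for your own code:

LateValidationUpdateValueStrategy strategy = new LateValidationUpdateValueStrategy();
strategy.setAfterSetValidator(new IValidator() {
    @Override
    public IStatus validate(Object value) {
        // Placeholder: delegate to the EMF generated model validator here.
        return validateWithEmfValidator(value);
    }
});

dataBindingContext.bindValue(
        WidgetProperties.text(SWT.Modify).observe(textField),
        EMFEditProperties.value(editingDomain, feature).observe(modelObject),
        strategy, null);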
Related
I'm writing part of a cross-platform application, where we mostly use REST (Jersey) and Hibernate to communicate between systems. I'm new to JavaFX, but my part of the program should use it to get input values from users. Here is how the code flow would look:
public class StartingClass {
    ...
    public void startingMethod(Payload payload) {
        // send REST requests to different places with different payloads, like:
        Response response = Utility.sendPostRequest(URI, payload2);
        Something something = response.readEntity(Something.class);
        // more processing with the returned values
        ...
    }
}
In one of the places where I send a request:
#Path("something")
#Consumes(MediaType.APPLICATION_JSON)
#Produces(MediaType.APPLICATION_JSON + ";charset=UTF-8")
public class Resource{
...
#POST
#Path(something)
public Response doSomething(Payload payload) {
//show JavaFX window with text fields and an okay button
JavaFXClass.launch(JavaFXClass.class);
/* THIS IS WHERE I would need to get back the input values somehow */
//payload3 has the input values I need to send back
return Response.entity(payload3).build();
}}
The JavaFX class extends Application and overrides the (void) start method, where I put together the window I want to show; after the button click (if the inputs are okay) I close the window.
So the idea is that startingMethod would have to wait until the response with the input values comes back (maybe returning some default values if the user doesn't type in anything for a minute; what would be the elegant solution for that?). This would keep the two sides in sync.
If I do more REST calls or database saves inside the JavaFX class, then I can't be sure the values are there by the time I want to use them in startingMethod (probably not), and it's probably a really bad-looking solution anyway.
What could I do? I don't know much about callback methods in JavaFX; can those help me here? Thanks!
In the end I moved the JavaFX class into the Resource class, meaning Resource now extends Application, overrides start(), and so on. In the doSomething method I call launch() in a try-catch block, catch the IllegalStateException if needed, and call start() directly instead (also in a try-catch block). The text field input values are stored in a global variable afterwards.
Also, in the start() method I have Platform.setImplicitExit(false); so that the doSomething method can be called multiple times without a problem, each time bringing up the JavaFX window. It's not a pretty solution.
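For the "wait until the input comes back, with a default after a minute" part of the question, a common building block is a CountDownLatch shared between the request thread and the JavaFX thread. This is only a sketch of that idea, not code from the answer above; all names are illustrative:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class InputCollector {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile String value; // written on the JavaFX thread

    // Called from the JavaFX button handler when the user confirms.
    public void submit(String input) {
        value = input;
        latch.countDown();
    }

    // Called from the request thread; falls back to a default on timeout.
    public String await(String defaultValue) throws InterruptedException {
        return latch.await(60, TimeUnit.SECONDS) ? value : defaultValue;
    }
}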
I suspect a problem with an old version of the ObjectBuilder, which was once part of the WCSF Extension project and has meanwhile moved into Unity. I am not sure whether I am on the right track or not, so I hope someone out there with more competent thread-safety skills can explain whether this could be an issue or not.
I use this (outdated) ObjectBuilder implementation in an ASP.NET WCSF web app, and rarely I see in the logs that the ObjectBuilder complains that a particular property of a class cannot be injected for some reason; the problem is always that this property should never have been injected at all. The property and class in question change constantly. I traced the code down to a method where a dictionary is used to hold the information whether a property is handled by the ObjectBuilder or not.
My question basically comes down to: Is there a thread-safety issue in the following code which could cause the ObjectBuilder to get inconsistent data from its dictionary?
The class which holds this code (ReflectionStrategy.cs) is created as a singleton, so all requests to my web application use this class to create their view/page objects. Its dictionary is a private field, only used in this method, and declared like this:
private Dictionary<int, bool> _memberRequiresProcessingCache = new Dictionary<int, bool>();
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;

    lock (_readLockerMrp)
    {
        if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
        {
            lock (_writeLockerMrp)
            {
                if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
                {
                    requires = MemberRequiresProcessing(member);
                    _memberRequiresProcessingCache.Add(member.MemberInfo.GetHashCode(), requires);
                }
            }
        }
    }

    return requires;
}
This code above is not the latest version you can find on CodePlex, but I still want to know whether it might be the cause of my ObjectBuilder exceptions. While we speak, I am working on an update to replace this old code with the latest version. This is the latest implementation; unfortunately, I cannot find any information on why it was changed. Might be for a bug, might be for performance...
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;

    if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
    {
        lock (_writeLockerMrp)
        {
            if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
            {
                Dictionary<TMemberInfo, bool> tempMemberRequiresProcessingCache =
                    new Dictionary<TMemberInfo, bool>(_memberRequiresProcessingCache);

                requires = MemberRequiresProcessing(member);
                tempMemberRequiresProcessingCache.Add(member.MemberInfo, requires);
                _memberRequiresProcessingCache = tempMemberRequiresProcessingCache;
            }
        }
    }

    return requires;
}
Using the hash code as the dictionary key looks problematic if you run a very large number of classes/members: hash codes are not unique, so two different members can collide on the same key, and one member could pick up the cached answer for another.
The double lock in the old version was odd: every thread takes the outer lock first, so only one thread can enter the whole section in any case, and taking a lock on every call certainly hurts performance. The new version is a trade-off: the first TryGetValue runs without any lock, and inside the lock a copy of the dictionary is made and then swapped in, so readers never see a dictionary that is being modified while it is read.
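For comparison, here is what this member cache looks like with a modern concurrent map, sketched in Java (the thread above is C#, but the idiom carries over): the map is keyed by the member itself rather than its hash code, so collisions are impossible, and the hand-rolled double-checked locking disappears.

import java.lang.reflect.Member;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MemberProcessingCache {
    // Keyed by the member itself, not its hash code.
    private final ConcurrentMap<Member, Boolean> cache = new ConcurrentHashMap<>();

    public boolean requiresProcessing(Member member) {
        // computeIfAbsent runs the expensive check at most once per key
        // and is safe under concurrent readers and writers.
        return cache.computeIfAbsent(member, this::memberRequiresProcessing);
    }

    private boolean memberRequiresProcessing(Member member) {
        return member.getName().startsWith("set"); // placeholder for the real reflection check
    }
}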
I have added a cache layer to my project. Now I wonder whether I can unit test the methods that manipulate the cache, or whether there is a better way to test the layer's logic.
I just want to check the process, for example:
1. When the item is not in the cache, the method should hit the database.
2. The next time, the method should use the cache.
3. When a change is made to the database, the cache should be cleared.
4. If the data retrieved from the database is null, it shouldn't be added to the cache.
I want to ensure that the logic I have placed in the methods works as expected.
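To make those rules concrete, here is a minimal state-based test sketch in Java, assuming a hypothetical cache-aside wrapper (your cache layer will differ):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

class CachingRepositoryTest {
    // Hypothetical cache-aside wrapper around a database lookup.
    static class CachingRepository {
        private final Map<Integer, String> cache = new HashMap<>();
        private final Map<Integer, String> database;
        int databaseHits = 0; // exposed so the test can count lookups

        CachingRepository(Map<Integer, String> database) {
            this.database = database;
        }

        String find(int id) {
            String cached = cache.get(id);
            if (cached != null) {
                return cached;
            }
            databaseHits++;
            String fromDb = database.get(id);
            if (fromDb != null) {
                cache.put(id, fromDb); // rule 4: never cache null
            }
            return fromDb;
        }
    }

    @Test
    void missesHitTheDatabaseAndNullsAreNotCached() {
        Map<Integer, String> db = new HashMap<>();
        db.put(1, "value");
        CachingRepository repo = new CachingRepository(db);

        repo.find(1); // rule 1: a miss hits the database
        repo.find(1); // rule 2: the second call is served from the cache
        assertEquals(1, repo.databaseHits);

        repo.find(2); // null result is not cached...
        repo.find(2); // ...so it is looked up again (rule 4)
        assertEquals(3, repo.databaseHits);
    }
}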
I'm presuming the cache is a third-party cache? If so, I would not test it; you'd be testing someone else's code otherwise.
If this caching is so important that you need to test it, I'd go with an integration or acceptance test. In other words, hit the page(s)/service(s) in question and check the content that way. By the very definition of what you wish to test, this is not a unit test.
On the flip side, if the cache is one you've rolled yourself, you'll easily be able to unit test the functionality. You might want to check out verification-based testing in order to test the behavior of the cache, as opposed to actually checking that items are added to/removed from the cache. Check out mocking for ways to achieve this.
To test for behaviour via mock objects (or something similar) I'd do the following, although your code will vary:
class Cacher
{
    public void Add(Thing thing)
    {
        // Complex logic here...
    }

    public Thing Get(int id)
    {
        // More complex logic here...
        return null; // placeholder so the sample compiles
    }
}

void DoStuff()
{
    var cacher = new Cacher();
    var thing = cacher.Get(50);
    thing.Blah();
}
To test the above method I'd have a test which uses a mock Cacher. You'd need to pass it into the method at runtime or inject the dependency via the constructor. From there the test would simply check that cacher.Get(50) is invoked, not that the item is actually retrieved from the cache. This tests the behavior of how the cacher should be used, not that it is actually caching/retrieving anything.
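A minimal sketch of that interaction test in Java with Mockito (the answer's pseudocode is C#-flavored, but the idea is the same; all types here are illustrative stand-ins):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class DoStuffTest {
    // Stand-ins for the types in the answer above.
    interface Cacher { Thing get(int id); }
    static class Thing { void blah() {} }
    static class StuffDoer {
        private final Cacher cacher;
        StuffDoer(Cacher cacher) { this.cacher = cacher; } // dependency injected
        void doStuff() { cacher.get(50).blah(); }
    }

    @Test
    void usesTheCacheToFetchTheThing() {
        Cacher cacher = mock(Cacher.class);
        when(cacher.get(50)).thenReturn(new Thing());

        new StuffDoer(cacher).doStuff();

        // Behavior verification: assert only that the interaction happened,
        // not that anything was really cached or retrieved.
        verify(cacher).get(50);
    }
}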
You could then fall back to state-based testing for the Cacher in isolation, e.g. when you add/remove items.
Like I said previously, this may be overkill depending on what you wish to do, but you seem pretty confident that the caching is important enough to warrant this sort of testing. In my code I try to limit mock objects as much as possible, though this sounds like a valid use case.
[Edit]
The main question here loosely translates as "is Flex multi-threaded?". I have since found out that it is not, so I won't have data mysteriously changing halfway through an operation. The code below worked, but it made things awkward and confusing. I eventually fixed the problem with an architecture change that eliminated the need to suppress events, as the first commenter suggested.
Infinite loops were eliminated by changing the way events were listened to and by performing certain actions explicitly rather than via events.
Collating events was made easier using a command pattern.
Basically, do not use the code below if you come across this page!
[/Edit]
I'm building some Flex applications using a simple, lightweight MVC pattern. Models extend or encapsulate a dispatcher and fire events when updated. I'm stuck with Flex 3.5.
In some situations, I'll want to suppress these events to avoid infinite loops or help collate multiple actions into a single event.
My first stab at a solution that doesn't litter the models with unnecessary and confusing code is this:
private var _suppressEvents:Boolean = false;
public function suppressEvents(callback:Function):void
{
// In case of error, ensure the suppression is turned off, then re-throw
var err:Error = null;
_suppressEvents = true;
try
{
callback();
}
catch(e:Error)
{
err = e;
}
_suppressEvents = false;
if (err)
{
throw (err);
}
}
public function dispatch(type:String, data:*):void
{
// Suppress if called from a suppress callback.
if (!_suppressEvents)
{
_dispatcher.dispatchEvent(new DataEvent(type, data));
}
}
Obviously I call 'suppressEvents' with a function containing the model code I wish to run.
My questions:
1: Is there a chance I could accidentally lose events using this technique?
2: Do I need to think about any other error edge cases when it comes to ensuring I don't accidentally end up in a suppressed state after a call?
3: Is there a cleaner way I'm missing?
Thanks!
I created the following sample method in my business logic layer. My database doesn't allow nulls for the name and parent columns:
public void Insert(string catName, long catParent)
{
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
So I unit test this, and the tests for an empty name and an empty parent fail. To get around that issue I have to refactor the Insert method as follows:
public void Insert(string catName, string catParent)
{
    // added to pass the test
    if (string.IsNullOrEmpty(catName))
        throw new InvalidOperationException("wrong action. name is empty.");
    long parent;
    if (long.TryParse(catParent, out parent) == false)
        throw new InvalidOperationException("wrong action. parent didn't parse.");

    // real business logic
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = parent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
My entire business layer consists of simple calls to the database, so now I'm validating the data again! I had already planned to do my validation in the UI and test that kind of thing in the UI unit tests. What should I test in my business logic method other than validation-related tasks? And if there is nothing to unit test, why does everybody say "unit test all the layers" and things like that, which I found a lot online?
The technique involved in testing is to break down your program into smaller parts (smaller components or even classes) and test those small parts. As you assemble those parts together, you write less comprehensive tests (the smaller parts are already proven to work) until you have a functional, tested program, which you then give to users for "user tests".
It's preferable to test smaller parts because:
It's simpler to write the tests. You'll need less data, you only set up one object, and you have to inject fewer dependencies.
It's easier to figure out what to test. You know the failing conditions from a simple reading of the code (or, better yet, from the technical specification).
Now, how can you guarantee that your business layer, simple as it is, is correctly implemented? Even a simple database insert can fail if badly written. Besides, how can you protect yourself from changes? Right now the code works, but what will happen in the future if the database is changed or someone updates the business logic?
However, and this is important, you actually don't need to test everything. Use your intuition (which is also called experience) to understand what needs testing and what doesn't. If your method is simple enough, just make sure the client code is correctly tested.
Finally, you've said that all your validation will occur in the UI. The business layer should be able to validate the data in order to increase reuse in your application. Fail to do that, and whoever changes your code in the future might create a new UI and forget to add the required validations.
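To make that concrete, here is the kind of business-layer test the answer argues for, sketched in Java (the question's code is C#; CategoryService is a hypothetical stand-in for the Insert method):

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class CategoryServiceTest {
    // Hypothetical port of the question's business-layer method.
    static class CategoryService {
        void insert(String name, long parent) {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("name is empty");
            }
            // ... the real persistence call would go here ...
        }
    }

    @Test
    void rejectsEmptyName() {
        CategoryService service = new CategoryService();
        // The business layer itself, not just the UI, enforces this rule.
        assertThrows(IllegalArgumentException.class, () -> service.insert("", 1L));
    }
}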