I have recently added Spork to my development stack and am loving the increased speed, but there are a few tests that behave differently with and without Spork.
The test in question is pretty simple, it tests that one class of object can take two other objects and create a join between them.
it 'should associate object1 with object2' do
  object_under_test.action_being_tested(object1, object2) # action
  object1.associated_objects.should be_include(object2)   # assertion
end
If I run this test with Spork running, it passes fine. Without Spork, however, the assertion fails. I can make it pass in both cases by reloading object1 before making the assertion.
it 'should associate object1 with object2' do
  object_under_test.action_being_tested(object1, object2) # action
  object1.reload
  object1.associated_objects.should be_include(object2)   # assertion
end
In terms of the intended behaviour, needing to add the reload in these tests is not really a problem, as object1 would get reloaded anyway before any calls to its associated_objects are made.
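For reference, a minimal sketch of what that reload buys you, assuming ActiveRecord and made-up Parent/Child/Join models (none of these names are from the question):

object1 = Parent.create!
object2 = Child.create!

object1.associated_objects.to_a                  # loads and caches the (empty) collection
Join.create!(parent: object1, child: object2)    # join row created without going through object1

object1.associated_objects.include?(object2)     # => false, the cached collection is stale
object1.reload                                   # re-fetches object1 and clears cached associations
object1.associated_objects.include?(object2)     # => true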
I just find it odd that the two ways of running the tests give different results. I get the feeling that I'm missing some core bit of knowledge about how Spork works! Like, does Spork automatically reload objects?
If anyone can shed some light on this for me I'd be really grateful.
Using R Version 4.0.2 and RStudio Version 1.3.1056
This is honestly one of the strangest features I've seen in RStudio. I suppose there's probably a good reason for it to be there, but I'm currently not seeing it, and I feel it can lead to a lot of misleading data.
Basically, to my understanding, when you create and open an R project in RStudio, RStudio creates a Session with a Global Environment.
Every time you run something, the result is added to the Global Environment; I assume it's kept there as a cached value.
However, I've encountered situations where this feature leads to:
Outdated/wrong values being shown in my tests.
Cases where a function stops working altogether after changing one piece of code, executing the new code, then undoing the change.
Functions "bleeding into" other files without importing/sourcing them.
Cases 1 and 2 obviously lead to a lot of issues while testing. Say you run a test like
test <- someFunction()
test #to display the value of the test
If the code is correct, the test will execute and the result will be stored in test in the Global Environment.
However, if you then break the code and run the test again, test already has a value stored in the Global Environment, so that old value will print, even though the function failed and thus didn't return anything. Of course, if you scroll up through the console feed, you might find a line after test <- someFunction() saying "someFunction failed for X reason", but I still think this is both pretty misleading and not very intuitive. Sometimes the result of a function is really large, and it's a pain to scroll all the way up the console to see whether the code exited with an error, whereas other IDEs would simply tell you at the end of the console output that the code failed, rather than printing the old, outdated value.
Example (screenshots): running the proper code, then running it again after changing is.na to the non-existent is.not.na. Notice how it still prints the old value belonging to the previous version of the function.
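A minimal, self-contained reproduction of what those screenshots show (someFunction and its body are made up for illustration):

someFunction <- function() {
  x <- c(1, NA, 3)
  x[!is.na(x)]            # working version
}
test <- someFunction()
test                      # 1 3 -- now stored in the Global Environment

someFunction <- function() {
  x <- c(1, NA, 3)
  x[!is.not.na(x)]        # typo: is.not.na() does not exist
}
test <- someFunction()    # Error: could not find function "is.not.na"
test                      # still prints 1 3 -- the failed call never overwrote test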
The third case can also lead to misleading scenarios.
If you run a function definition in any file within your session, the function is stored in the Global Environment. This allows you to call it from any other file, even if you haven't added a source() statement at the top to load the file containing that function.
Once again, this can lead to cases where you inadvertently change or add a function in file B without running it, then try to invoke it from file A and get unexpected results, because you were actually invoking the old, outdated version and the Global Environment has no idea about the changes or the new function.
All of these issues are rather easy to fix, but I think that's a bit beside the point. Why is this a feature in general? Why isn't the Global Environment emptied when execution fails with an error? I know you can manually empty the GE whenever you want, but it seems odd to me that the IDE doesn't do it on its own, or, to my knowledge, even provide an option to do so.
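For reference, the manual clear mentioned above is just:

rm(list = ls())    # empties the Global Environment (same as the broom button in the Environment pane)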
I can imagine that it provides some benefit at runtime, but is it really that significant that it can justify these behaviors?
I have found that many languages provide some way to change code at runtime, and many people ask how to do this in one language or another. By "change code" I mean having a program rewrite its own code at runtime, using reflection or something else.
I have around six years of experience in Java application development, and I have never come across a problem where I had to change code at runtime.
Can anyone explain why we would need to change code at runtime?
I have experienced three huge benefits of changing code at runtime:
Fixing bugs in a production environment without shutting down the application server. This allowed us to fix bugs in just one part of the application without interrupting the whole system.
The possibility of changing business rules without having to deploy a new version of the application, and quicker delivery of features.
Writing unit tests is easier. For example, you can mock dependencies, add some desired behaviour to objects, and so on. The Spock Framework does this well (see the sketch after this list).
Of course, we only had these benefits because we have a very well-defined development process for how to proceed in these situations.
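Spock itself is Groovy; a rough Java equivalent of that kind of runtime-generated test double, using Mockito (my substitution, not something the answer mentions, and all class names are made up), might look like this:

import static org.mockito.Mockito.*;
import org.junit.Assert;
import org.junit.Test;

// PriceService and Checkout are hypothetical, just to show a dependency being mocked.
interface PriceService { double priceOf(String sku); }

class Checkout {
    private final PriceService prices;
    Checkout(PriceService prices) { this.prices = prices; }
    double total(String sku, int qty) { return prices.priceOf(sku) * qty; }
}

public class CheckoutTest {
    @Test
    public void totalUsesTheStubbedPrice() {
        PriceService prices = mock(PriceService.class);   // proxy class generated at runtime
        when(prices.priceOf("book")).thenReturn(10.0);    // desired behaviour added on the fly

        Assert.assertEquals(20.0, new Checkout(prices).total("book", 2), 0.001);
    }
}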
At times you may need to call a method based on input that was received earlier in the program.
It could be used for dynamic calculation of a value based on a key index, where every key is calculated in a different way or the calculation requires fetching data from different sources. Instead of using a switch statement, you can invoke a method dynamically using methodName + indexOfTheKey.
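A small sketch of that idea (the class and method names are mine, purely for illustration): reflection builds the method name from the key index and invokes it.

import java.lang.reflect.Method;

public class KeyCalculator {

    // each key index has its own calculation
    public double calculateKey1() { return 1.0; }
    public double calculateKey2() { return 2.0 * 3; }

    // instead of a switch over indexOfTheKey, build the name and invoke the method
    public double calculate(int indexOfTheKey) throws Exception {
        Method m = getClass().getMethod("calculateKey" + indexOfTheKey);
        return (Double) m.invoke(this);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new KeyCalculator().calculate(2)); // invokes calculateKey2()
    }
}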
I'm currently learning TDD and have finished reading a book. Now that I'm about to start an ASP.NET MVC project, it seems hard to follow the "no code without a failing test" rule, at least at the start.
Should I add the needed folders, like Controllers and other infrastructure-related files, right at the start? Just add them? It seems hard to get going: does everything need to fail a test first? And how do I do it for the front-end? It seems quite complicated when it's not the business logic being tested. Could you point me to some resources on employing TDD for views/the front-end?
In order to have a failing test, the test must be run. In order to run the test, the unit test project must compile. Therefore you should add the Controllers folder and at least one controller with one action. This is, IMHO, the bare minimum.
Write the first action with a single throw new NotImplementedException(); line of code.
Then write a test that exercises the method and asserts that the result is not null or is of a given type (for example, ViewResult).
Now run the test and see it fail. The TDD supporters will cheer in the arena.
Next you can replace the single line of code (the NotImplementedException) with a bit of actual code and repeat.
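A minimal sketch of that first red/green step, assuming ASP.NET MVC with NUnit (the controller name and the test framework are my assumptions, not the answer's):

using System;
using System.Web.Mvc;
using NUnit.Framework;

public class ProductsController : Controller
{
    public ActionResult Index()
    {
        throw new NotImplementedException();   // first version: red
    }
}

[TestFixture]
public class ProductsControllerTests
{
    [Test]
    public void Index_ReturnsAViewResult()
    {
        var controller = new ProductsController();

        var result = controller.Index();          // throws for now, so the test fails

        Assert.IsInstanceOf<ViewResult>(result);  // passes once Index() simply returns View()
    }
}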
I'm curious to know how feasible it is to get away from depending on the application's internal structure when you create an automated test case. Otherwise you may need to rewrite the test case when a developer modifies part of the code for a bug fix, etc.
We could write several automated test cases based on the application's internal object structure, but let's assume the object hierarchy changes after six months or so; how do we approach that kind of issue?
I can't speak for other testing tools, but in QTP's case the tool introduces a level of abstraction over the application, so that non-functional changes in the application often (but not always) have no effect on the way the tool identifies objects.
For example, in QTP all web elements are considered direct children of the document, so changes in the DOM (such as additional tables) don't change an object's description.
In TestComplete, there are a couple of ways to make sure that a changed app structure does not break your tests.
You can set up the Aliases tree of the Name Mapping feature. In this case, if the app structure changes, you modify the Aliases tree accordingly and your tests keep working without needing to be modified themselves.
You can use the Extended Find feature of Name Mapping to ignore parts of the actual object tree and search for the needed objects at deeper levels.
This is what I was forced to do after losing all my work twice due to changes in the DOM structure:
Every single time I need to work with an object, I use the Find function with the object's ID, searching for it on the Page object. This way, whenever the DOM gets updated, my tests still run smoothly.
The only thing that will break my tests is if an object's ID gets changed, but that's not very likely to happen.
Here you can find some examples of the helper functions I use.
I am a beginner with EJBs, and I have a question about EJB 2.0. For session beans, I call create() without arguments on the EJBHome, but I didn't define any of the corresponding methods, i.e. ejbCreate and ejbRemove, in the bean. Can I compile or run this code without those methods in the bean?
You can compile it but cannot run it. You must have a matching ejbCreate() method in your bean class.
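As a rough sketch of the minimum shape an EJB 2.0 session bean class needs (the names here are invented for illustration), the no-argument create() on the home must be matched by a no-argument ejbCreate(), and the SessionBean interface brings in the other callbacks:

import javax.ejb.CreateException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

public class MySessionBean implements SessionBean {

    // matches the home interface's create() throws CreateException, RemoteException
    public void ejbCreate() throws CreateException {}

    // lifecycle callbacks required by the SessionBean interface
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}

    // business method exposed through the component interface
    public String hello() { return "hello"; }
}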
If you are very new to EJB, I recommend testing your code with OpenEJB (here's a getting started video). Not because I work on the project (which I do), but because we aggressively check code for mistakes and print clear errors about what you might have done wrong.
The output comes in three levels of verbosity. At the most verbose level it reads more like an email reply, and error messages include information like "put code like this -code-sample- in your bean." The code samples even try to use your method names and parameter names where possible.
It's also compiler-style: if you made the same mistake in 10 places, you will see all 10 on the first run and have the chance to fix them all at once, rather than the traditional cycle of fix one issue, compile, test, hit the same error elsewhere in the code, repeat N times.
And of course you can still deploy into another EJB container. It sounds like you are stuck with a pretty old one if you have to use EJB 2.0.
Here's a list of some of the mistakes that are checked