Return model instance from controller to test class in Laravel - PHPUnit

I am unit testing in Laravel with PHPUnit. The situation is that I have to return a model instance from the controller back to the testing class, where I will use the attributes of that object to test an assertion. How can I achieve that?
Currently I am JSON-encoding that instance into the response and using it in a way that works but is ugly. I need a clearer way.
This is my test class:
/** @test */
function authenticated_user_can_create_thread()
{
    //Given an authenticated user
    $this->actingAs(factory('App\User')->create());

    //and a thread
    $thread = factory('App\Thread')->make();

    //when user submits a form to create a thread
    $created_thread = $this->post(route('thread.create'), $thread->toArray());

    //the thread can be seen
    $this->get(route('threads.show', ['channel' => $created_thread->original->channel->slug, 'thread' => $created_thread->original->id]))
        ->assertSee($thread->body);
}
and this is the controller method:
public function store(Request $request)
{
    $thread = Thread::create([
        'user_id'    => auth()->id(),
        'title'      => $request->title,
        'body'       => $request->body,
        'channel_id' => $request->channel_id,
    ]);

    if (app()->environment() === 'testing') {
        // If the request comes from PHPUnit / the test environment,
        // send the created thread back as part of a JSON response.
        return response()->json($thread);
    }

    return redirect()->route('threads.show', ['channel' => $thread->channel->slug, 'thread' => $thread->id]);
}
As you can see in the test class, I am receiving the object returned from the controller in the $created_thread variable. However, the controller is returning an instance of Illuminate\Foundation\Testing\TestResponse, so the thread embedded in this response is not easy to extract. You can see I am doing $created_thread->original->channel->slug and $created_thread->original->id, but I am sure there is a better way of achieving the same thing.
Can anyone please point me in the right direction?

PHPUnit is a unit testing suite, hence the name. Unit testing is, by definition, writing tests for each unit (that is, each class, each method) as separately as possible from every other part of the system. For each thing users could use, you want to test that it, and only it, apart from everything else, functions as specified.
Your problem is that there is nothing to test. You haven't created any method with logic that could be tested. Testing the controller action is pointless, as it only proves that controllers work, which is for the Laravel creators to check.
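That said, if you want to keep this as a feature test, a common alternative (shown here only as a sketch, and assuming a fresh test database, e.g. via the RefreshDatabase trait) is to drop the environment check in the controller and read the created thread back from the database inside the test:
/** @test */
function authenticated_user_can_create_thread()
{
    $this->actingAs(factory('App\User')->create());

    $thread = factory('App\Thread')->make();

    // Post the form; the controller keeps its normal redirect response.
    $this->post(route('thread.create'), $thread->toArray());

    // Fetch the thread that was just persisted instead of digging it out
    // of the TestResponse object.
    $created_thread = \App\Thread::where('body', $thread->body)->firstOrFail();

    $this->get(route('threads.show', [
        'channel' => $created_thread->channel->slug,
        'thread'  => $created_thread->id,
    ]))->assertSee($thread->body);
}
This keeps the controller's production behaviour intact and removes the need to reach into TestResponse::$original.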

Related

Grails 3.3 Multiple Asynchronous GORM calls during integration test without access to the database.

I was writing integration tests in Grails 3.3 with multiple asynchronous GORM calls when I realized I could not get access to values stored in the database. I wrote the following test to understand what is happening.
void "test something"() {
given:
def instance = new ExampleDomain(aStringField: "testval").save(flush:true)
when:
def promise = ExampleDomain.async.task {
ExampleDomain.get(instance.id).aStringField
}
then:
promise.get() == "testval"
}
My domain class
class ExampleDomain implements AsyncEntity<ExampleDomain> {
    String aStringField
    static constraints = {}
}
build.gradle configuration
compile "org.grails:grails-datastore-gorm-async:6.1.6.RELEASE"
Any idea what is going wrong? I'm expecting to have access to the datastore during the execution of the async call.
Most likely the given block runs in a transaction that hasn't been committed. Without seeing the full test class it is impossible to know for sure, but it is likely you have the @Rollback annotation.
The fix is to remove the annotation and put the logic for saving the domain in a separate transactional method. You will then be responsible for cleaning up any inserted data.
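A minimal sketch of that idea (the helper name createTestDomain and the cleanup method are illustrative, and it assumes @Rollback has been removed from the spec):
// Save the domain in its own committed transaction so the async task,
// which runs with a separate session, can actually see it.
private ExampleDomain createTestDomain() {
    ExampleDomain.withNewTransaction {
        new ExampleDomain(aStringField: "testval").save(flush: true, failOnError: true)
    }
}

// There is no automatic rollback any more, so remove the inserted data manually.
def cleanup() {
    ExampleDomain.withNewTransaction {
        ExampleDomain.list()*.delete(flush: true)
    }
}
The given: block would then call createTestDomain() instead of saving the domain directly.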

Validation errors block undo

I have the following problem with my EMF-based Eclipse application:
Undo works fine. Validation works fine. But when there is a validation error for the data in a GUI field, this blocks the use of the undo action. For example, it is not possible to undo to get back to a valid state for that field.
Tools that are used in the application:
Eclipse data binding
UpdateValueStrategy instances on the bindings for validation
Undo is implemented using standard UndoAction that calls CommandStack.undo
A MessageManagerSupport class that connects the validation framework to the Eclipse Forms based GUI.
The data bindings look like this:
dataBindingContext.bindValue(WidgetProperties.text(...),
        EMFEditProperties.value(...), validatingUpdateStrategy, null);
The problem is this:
The undo system works on the commands that change the model.
The validation system stops updates from reaching the model when there are validation errors.
To make undo work when there are validation errors, I think I could do one of these things:
Make the undo system work on the GUI layer. (This would be a huge change; it's probably not possible to use EMF for this at all.)
Make invalid data in the GUI trigger commands that change the model data, in the same way valid data does. (This would be okay as long as the data cannot be saved to disk, but I can't find a way to do this.)
Make the validation work directly on the model, maybe triggered by a content listener on the Resource. (This is a big change of validation strategy, and it doesn't seem possible to track the source GUI control at this stage.)
These solutions either seem impossible or have severe disadvantages.
What is the best way to make undo work even when there are validation errors?
NOTE: I accepted Mad Matt's answer because their suggestions led me to my solution, but I'm not really satisfied with it and wish there were a better one.
If someone finds a better solution at some point, I'd be happy to consider accepting it instead of the current one!
It makes sense that the validator protects your target value from invalid values.
Therefore the target command stack remains untouched in case of an invalid value.
Why would you want to force invalid values to be set? Isn't Ctrl+Z in the GUI enough to restore the last valid state?
If you still want to set these values on your actual target model, you can play around with the UpdateValueStrategy.
The update phases are:
Validate after get - validateAfterGet(Object)
Conversion - convert(Object)
Validate after conversion - validateAfterConvert(Object)
Validate before set - validateBeforeSet(Object)
Value set - doSet(IObservableValue, Object)
I'm not sure where the validation error (Status.ERROR) occurs exactly, but you could check where it happens and then force a SetCommand manually.
You can set a custom IValidator for each step on your UpdateValueStrategy to do that, as sketched below.
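For example, a minimal sketch using the non-generic databinding API from the question, with Java 8 lambdas for brevity (the widget, editing domain, feature, and model object names are placeholders):
// Each phase of the UpdateValueStrategy can be given its own IValidator;
// returning an error status stops the update at that phase.
UpdateValueStrategy strategy = new UpdateValueStrategy();
strategy.setAfterGetValidator(value ->
        value == null ? ValidationStatus.error("Value is required") : ValidationStatus.ok());
strategy.setBeforeSetValidator(value ->
        ((String) value).length() > 50 ? ValidationStatus.error("Value is too long") : ValidationStatus.ok());

dataBindingContext.bindValue(
        WidgetProperties.text(SWT.Modify).observe(textField),
        EMFEditProperties.value(editingDomain, feature).observe(modelObject),
        strategy, null);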
NOTE: This is the solution I ended up using in my application. I'm not really satisfied with it. I think it is a little bit of a hack.
I accepted Mad Matt's answer because their suggestions led me to this solution.
If someone finds a better solution at some point, I'd be happy to consider accepting it instead of the current one!
I ended up creating an UpdateValueStrategy subclass which runs a validator after a value has been set on the model object. This seems to be working fine.
I created this answer to post the code I ended up using. Here it is:
/**
 * An {@link UpdateValueStrategy} that can perform validation AFTER a value is set
 * in the model. This is used because undo doesn't work if no model change is made.
 */
public class LateValidationUpdateValueStrategy extends UpdateValueStrategy {

    private IValidator afterSetValidator;

    public void setAfterSetValidator(IValidator afterSetValidator) {
        this.afterSetValidator = afterSetValidator;
    }

    @Override
    protected IStatus doSet(IObservableValue observableValue, Object value) {
        IStatus setStatus = super.doSet(observableValue, value);
        if (setStatus.getSeverity() >= IStatus.ERROR || afterSetValidator == null) {
            return setStatus;
        }

        // I used a validator here that calls the EMF generated model validator.
        // In that way I can specify validation of the model.
        IStatus validStatus = afterSetValidator.validate(value);

        // Merge the two statuses
        if (setStatus.isOK() && validStatus.isOK()) {
            return validStatus;
        } else if (!setStatus.isOK() && validStatus.isOK()) {
            return setStatus;
        } else if (setStatus.isOK() && !validStatus.isOK()) {
            return validStatus;
        } else {
            return new MultiStatus(Activator.PLUGIN_ID, -1,
                    new IStatus[] { setStatus, validStatus },
                    setStatus.getMessage() + "; " + validStatus.getMessage(), null);
        }
    }
}

How to test service object methods are called?

I'm trying to build some tests for my service objects.
My service file is as follows...
class ExampleService
  def initialize(location)
    @location = coordinates(location)
  end

  private

  def coordinates(location)
    Address.locate(location)
  end
end
I want to test that the private methods are called by the public methods. This is my code...
subject { ExampleService.new("London") }

it "receives location" do
  expect(subject).to receive(:coordinates)
  subject
end
But I get this error...
expected: 1 time with any arguments
received: 0 times with any arguments
Short answer: don't test it at all.
Long answer: after seeing Sandi Metz's advice on testing, you will agree, and you will want to test the way she does.
This is the basic idea:
The public methods of your class (the public API) must be tested
The private methods don't need to be tested
Summary of tests to do:
For incoming query methods, test the result
For incoming command methods, test the direct public side effects
For outgoing command methods, expect to send
Ignore: sends to self, commands to self, and queries to others
Taken from the slides of the conference; a sketch of how this applies to the service above follows.
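A minimal RSpec sketch of that idea for the service in the question (it assumes you are willing to expose the coordinates through a public attr_reader, and it stubs Address.locate because that is an outgoing query):
# Assert on what the object exposes publicly instead of expecting a call
# to a private method. Assumes ExampleService gains `attr_reader :location`.
RSpec.describe ExampleService do
  it "exposes the located coordinates" do
    allow(Address).to receive(:locate).with("London").and_return("51.5,-0.1")

    service = ExampleService.new("London")

    expect(service.location).to eq("51.5,-0.1")
  end
end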
In your first example, your subject has already been instantiated/initialized (by being passed to expect, invoking coordinates in the process) by the time you've set expectations on it, so there is no way for the expectation to receive :coordinates to succeed. Also, as an aside, subject is memoized, so there won't be an additional instantiation in the line that follows.
If you want to make sure your initialization calls a particular method, you could use the following:
describe do
  subject { ExampleService.new("London") }

  it "receives coordinates" do
    expect_any_instance_of(ExampleService).to receive(:coordinates)
    subject
  end
end
See also Rails / RSpec: How to test #initialize method?

Unit Test & Log4net

I have a unit test that tests an action in my controller; the action writes to log4net.
When I run the action it works well and writes to log4net.
However, when I run the unit test, the action doesn't write to log4net, but it doesn't throw any exception either.
Does anyone have a solution?
// ARRANGE
var memoryAppender = new MemoryAppender();
BasicConfigurator.Configure(memoryAppender);
// ACT
_sut.DoWhatever();
// ASSERT - using xunit - change the expression to fit your purposes
Assert.True(memoryAppender.GetEvents().Any(le => le.Level == Level.Warn), "Expected warning messages in the logs");
You don't need to add another layer of indirection by using a logging interface (if you don't want to). I have used the abstracted way for years, but am now moving towards just using the MemoryAppender, as it tests what is actually happening. Just be sure to .Clear() the appender after each test.
Log4net does not throw exceptions: http://logging.apache.org/log4net/release/faq.html
Writing to a log on disk or in a database in a unit test is counterproductive; the whole point is automation. You shouldn't have to check the logs every time you run tests.
If you truly need to verify that a call was made to log something, you should mock the ILog interface and assert that the appropriate method was called.
If you are using a mocking framework, this is trivial. If you aren't, you can create a TestLogger class that implements or partially implements ILog and exposes extra properties that show how many times a given method was called. Your assertions will check that the methods were called as expected.
Here is an example of a class to be tested:
public class MyComponent
{
    private readonly ILog _log;

    public MyComponent(ILog log)
    {
        _log = log;
    }

    public string DoSomething(int arg)
    {
        _log.InfoFormat("Argument was [{0}]", arg);
        return arg.ToString();
    }
}
and the test (using Rhino.Mocks to mock the ILog):
[TestClass]
public class MyComponentTests
{
    [TestMethod]
    public void DoSomethingTest()
    {
        var logger = MockRepository.GenerateStub<ILog>();
        var component = new MyComponent(logger);

        var result = component.DoSomething(8);

        Assert.AreEqual("8", result);
        logger.AssertWasCalled(l => l.InfoFormat(Arg<string>.Is.Anything, Arg<int>.Is.Equal(8)));
    }
}
Try adding:
[assembly: log4net.Config.XmlConfigurator()]
to AssemblyInfo.cs (or initialize log4net in any other way).
Or try using AssemblyInitialize as suggested in this answer.
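For example, a minimal MSTest sketch of the AssemblyInitialize approach (the class and method names here are arbitrary):
using log4net.Config;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TestAssemblySetup
{
    // Runs once before any test in the assembly and loads the log4net
    // configuration from the test project's configuration file.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        XmlConfigurator.Configure();
    }
}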
It is your log4net configuration. Right now it might be in your web.config or in a log4net.config file in the web app's bin folder. You have to place it in a common location and make it discoverable by both the web app and the tests, or you have to put it into the test project's app.config file. But if you have many test projects, it would be duplicated in a number of places, so ideally you would put it in a common place.
Here's another possible solution if none of the other solutions work for you...
Try writing your log file to the root of the C drive. By default, I set log4net to write to the current directory, which is always the directory the unit test is running from, right?... Wrong! I'm running Windows 8 with VS 2012 using MSTest, and it writes the file to a local temp directory which gets deleted after the unit test completes. In my setup it writes the file here:
C:\Users\[myself]\AppData\Local\Temp\TestResults
Bottom line: any unit tests I write from now on are going to use a full absolute log file path and not a relative one.

Best way to implement 1:1 asynchronous callbacks/events in ActionScript 3 / Flex / AIR?

I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between:
whoever instantiated a given command object and the command object,
the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls.
Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships.
Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way?
Here's an example to illustrate my current method:
I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor.
Then the command object would repeat this pattern with its connection to the "data access" object.
When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
I'll try one more idea:
Have your data access object return its own AsyncToken (or some other object that encapsulates a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code):
public function deleteThing( id : String ) : DeferredResponse {
    var deferredResponse : DeferredResponse = new DeferredResponse();
    var asyncToken : AsyncToken = theRemoteObject.deleteThing(id);

    var result : Function = function( o : Object ) : void {
        deferredResponse.notifyResultListeners(o);
    };

    var fault : Function = function( o : Object ) : void {
        deferredResponse.notifyFaultListeners(o);
    };

    asyncToken.addResponder(new ClosureResponder(result, fault));

    return deferredResponse;
}
The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure.
Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, and returns that object. When the RPC returns, one of the closures, result or fault, gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response.
In the command it would look something like this:
public function execute( ) : void {
    var deferredResponse : DeferredResponse = dao.deleteThing("3");
    deferredResponse.addEventListener(ResultEvent.RESULT, onResult);
    deferredResponse.addEventListener(FaultEvent.FAULT, onFault);
}
or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered.
But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate.
Many of the Flex RPC classes, like RemoteObject, HTTPService, etc. return AsyncTokens when you call them. It sounds like this is what you're after. Basically the AsyncToken encapsulates the pending call, making it possible to register callbacks (in the form of IResponder instances) to a specific call.
In the case of HTTPService, when you call send() an AsyncToken is returned, and you can use this object to track the specific call, unlike the ResultEvent.RESULT, which gets triggered regardless of which call it is (and calls can easily come in in a different order than they were sent).
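For example, a minimal sketch (the httpService instance and the two handler functions are assumed to exist already):
// Track one specific call through its AsyncToken rather than through a
// service-level ResultEvent.RESULT listener that fires for every call.
var token:AsyncToken = httpService.send();
token.addResponder(new mx.rpc.Responder(onThisCallResult, onThisCallFault));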
The AbstractCollection is the best way to deal with persistent objects in Flex / AIR, and the GenericDAO provides the answer.
A DAO is the object which performs CRUD operations and other common operations on a ValueObject (known as a POJO in Java).
A GenericDAO is a reusable DAO class which can be used generically.
Goal:
In the Java IBM GenericDAO, the steps to add a new DAO are simply:
Add a valueobject (pojo).
Add a hbm.xml mapping file for the valueobject.
Add the 10-line Spring configuration file for the DAO.
Similarly, in the AS3 project Swiz DAO, we want to achieve a similar feat.
Client-side GenericDAO model:
As we are working in a client-side language, we should also manage a persistent object collection for every ValueObject.
Source: http://github.com/nsdevaraj/SwizDAO
