I have the following requirement:
I want to test my UI (using Graphene and Drone), but I need to log in to the application before all tests. I have a database authentication realm in JBoss which, for testing, uses an H2 in-memory database. So what I need is to add a user (username, password) to the users table before all tests so I can log in to my application successfully.
I first tried to inject my users EJB into the test class so I could create a user in the database before all tests. This is impossible because the UI tests run on the client (testable = false). It was obvious that at the time I did not know exactly how Arquillian works...
I then tried to use the Arquillian Persistence Extension and the @UsingDataSet annotation, but this also fails for the same reason (although I am not sure why, since I don't know exactly how this annotation works).
Finally, I tried to create a singleton EJB with the @Startup annotation and create the user I need in its @PostConstruct method. When debugging, I can see in the H2 console that the user is created. But when I run my tests, the login still fails.
Can someone explain why this last case fails? I do not understand it. But most importantly, if someone knows how to make this work, I would greatly appreciate it!
So, when you run a client-side test, nothing gets added to your archive. If it's possible for you, mark your archive as testable = true and then these extensions will be added to the test. You can do things like have a setup method that runs on the server; however, the values for the user will not be sent back to you. You'll need to insert them yourself. As long as you're OK with that, this should work for you.
I've created a very small symfony2 bundle here: https://github.com/BranchBit/AirGramBundle
Just a simple service, which calls a remote url.
Now, unit-testing-wise, should I even be testing this? Really, all that can go wrong is the remote host not being available, but that's not my code's issue.
If this were your bundle to maintain, what would you suggest? I would like 100% code coverage; however, 50% of the code is just calling the remote URL...
Suggestions?
This test is unnecessary in my opinion. If you do want to test it, you can use WebTestCase. Your service in the bundle should accept a Symfony\Component\BrowserKit\Client as the client implementation. You can choose between Goutte\Client for production (firing real requests with cURL) and Symfony\Component\HttpKernel\Client for testing. Then you have to dynamically add a route or a request listener.
It doesn't matter how, but the test is more complex than the implementation, so how can you be sure your test is okay?
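The testable part of such a bundle is usually the logic around the remote call, not the call itself. Below is a minimal, language-agnostic sketch of that idea in Java (the real bundle is PHP, and all names here are hypothetical): the service depends on a small client interface, so a test can stub the network away and still cover the URL-building logic.

```java
import java.util.Objects;

// Hypothetical interface standing in for the injected HTTP client.
interface HttpFetcher {
    String get(String url);
}

// Hypothetical service: builds the remote URL and delegates the call.
class AirGramService {
    private final HttpFetcher fetcher;

    AirGramService(HttpFetcher fetcher) {
        this.fetcher = Objects.requireNonNull(fetcher);
    }

    String notify(String user) {
        return fetcher.get("https://example.invalid/notify?user=" + user);
    }
}

public class AirGramServiceDemo {
    public static void main(String[] args) {
        // In a test, the fetcher is a stub: no remote host involved.
        HttpFetcher stub = url -> "OK:" + url;
        AirGramService service = new AirGramService(stub);
        System.out.println(service.notify("bob"));
        // prints "OK:https://example.invalid/notify?user=bob"
    }
}
```

With this shape, the only untested code is the one-line production fetcher, which is exactly the part whose failures are the remote host's problem, not yours.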
When I use Cloudify (2.7) to deploy an application (e.g. an application that includes two services, A and B), I try to use Admin.addEventListener() to add some event listeners, but it doesn't work!
I tried to add a ProcessingUnitStatusChangedEventListener; when I debug the code, the value of (ProcessingUnitStatusChangedEvent)event.getNewStatus() changes from SCHEDULED to INTACT, then SCHEDULED, then INTACT again.
I also tried to add a ProcessingUnitInstanceLifecycleEventListener; when I debug the code, the status is INTACT, but the service is not available!
Is there any other listener or method to know that the application (not just the services) is available, or am I using the listeners in the wrong way?
First, the Admin API is internal - use it at your own risk. And you should not be using it the way you are - Cloudify adds a lot of logic on top of the internal Admin API.
Second, it is not exactly clear where you are executing your code from.
You can always use the rest client to get an accurate state of the application. Look at https://github.com/CloudifySource/cloudify/blob/master/rest-client/src/main/java/org/cloudifysource/restclient/RestClient.java#L388
In addition, if you are running this code in a service lifecycle event handler, the easiest way to implement this is to have your 'top' level service, the one that should be available last, write an application entry to the shared attributes store in its 'postStart' event. Everyone else can just periodically poll on this entry. The polling itself is very fast, all in-memory operations.
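The polling side of that pattern is simple. Here is a hedged sketch in plain Java; the shared attributes store is mocked with a map, and the attribute name is hypothetical. In a real Cloudify lifecycle handler you would read the actual attributes store instead.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AppReadyPollDemo {
    // Stand-in for the shared attributes store (the real one is provided by Cloudify).
    static final Map<String, String> applicationAttributes = new ConcurrentHashMap<>();

    // Polls until the 'top' service has written its ready flag, or the timeout expires.
    static boolean waitForApplicationReady(long timeoutMillis, long intervalMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if ("true".equals(applicationAttributes.get("appReady"))) {
                return true;
            }
            Thread.sleep(intervalMillis);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // The top-level service would set this in its 'postStart' event.
        applicationAttributes.put("appReady", "true");
        System.out.println(waitForApplicationReady(1000, 50)); // prints "true"
    }
}
```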
If you do not have a top-level service, or your logic is more complicated than that, you would need to use the Service Context API to scan each service and its instances to see if they are up. An explanation of getting service instance state is available here:
cloudify service dependsOn other service
I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL Database. The web service doesn't have a way to retrieve the data, that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine.
It was suggested by a teammate that I add data validation, so that after the initial response validator runs I check the database and compare the data with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB. The funny thing is that when I step through it in debug with the breakpoint, it does validate successfully.
Before I get too much more into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test, and is the response validation as far as I should go?
That is not asking too much of the test; just make sure the database test gets called after you yield the WebTestRequest for the web service call.
So in that case, the database check is separate from the call.
Post your web test code if there are still issues.
I've just encountered an error in my app that probably could have been caught with some integration tests, so I think it's high time I wrote some!
My question relates to the setup of these tests, and at what layer of the code you run the tests against.
Setup
Considering I should have lots of integration tests, I don't want to be creating and dropping a test database for every test; this would be epically slow (even if it's an SQLite in-memory one). My thoughts are:
Have a test db which sits next to my dev db
Before testing, run some reset script which will set up my schema correctly and insert any necessary data (not test case specific)
Simply use this test db as if it's the real db.
However, it seems very wasteful that I have to run my Fluent NHibernate configuration in every [SetUp]. Is this just tough? What are my options here?
My session is currently wrapped in a UoW pattern, with creation and destruction performed on begin_request and end_request (MVC web application) respectively. Should I modify this to play well with the tests to solve this problem?
Testing
When it comes to actually writing some tests, how should I do it?
Should I test from the highest level possible (my MVC controller actions) or from the lowest (repositories)?
If I test at the lowest level, I'll have to manually hard-code all the data. This will make my tests brittle to changes in the code, and also not representative of what will really happen at runtime. If I test at the highest level, I have to run all my IoC container setup so that dependencies get injected and the whole thing functions (again, repeating this in every [SetUp]?).
Meh! I'm lost, someone point me in the right direction!
Thanks
Regarding creating the session factory, I create a class called _AssemblyCommon in my test project and expose the session factory as a static from there. A method in a class marked with NUnit's [SetUpFixture] attribute configures the session factory.
In general integration tests should cover CRUD operations in the repositories. To do this, I have one test method per object (or aggregate root) and do an insert, retrieve, update, and delete all within that method. I also test any cascade deletes that I've defined. Keeping these operations in a single method doesn't leave any traces behind in the database. I do have some integration tests that leave behind test data but that hasn't been a problem.
Your higher-level operations should be unit tested, mocking the repositories (I use Moq) where possible.
In my current MVC app I've found that it's enough to test the interaction between the repositories and the database. More than anything, this is to iron out any wrinkles in the NHibernate mapping. Everything (OK, I'm exaggerating when I say everything) above that layer is unit tested in isolation. I did have some integration tests from the controllers all the way down the stack to the database, and these used an IoC container (StructureMap) to build the controllers and their dependencies, but I found these tests were really not adding anything and they were quite an overhead to maintain, so I've removed them from the 'integration' tests for now. I may find a reason to put them back in, but so far I haven't.
Anyway, the test process I use works like this:
The build process for the data access layer test assembly creates the test database via a FluentNHibernate configuration ExposeSchema() call. The build process then runs some NHibernate repository level code to populate reference tables in the database.
Each integration test that runs is then wrapped in a System.Transactions.TransactionScope using() statement, and Complete() is never called on the TransactionScope, so each test runs in isolation and the results can be set up and verified within the using() scope by an ISession without altering the state of any other test data. e.g.
using (new TransactionScope())
{
    // This is an NHibernate ISession - set up any dependencies for the repository tests here
    var session = NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting();

    // Repository test code goes here - note the repository is using a different ISession
    var testRepository = new NHibernateGenericRepository<NewsItem>(NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting());

    // Validate the results here directly with the first ISession
}
// At this point the transaction is rolled back and we haven't changed the test data
This means that I don't have to modify the UnitOfWork implementation I'm using - the transaction is rolled back at a higher level.
Other than using a web service, is there any way to call a method in a web app from a Windows application? Both run on the same machine.
I basically want to schedule a job to run a Windows app which updates some file (for a Bayesian spam filter), then I want to notify the web app to reload that file.
I know this can be done in other ways but I'm curious to know whether it's possible anyway.
You can make your Windows app connect to the web app and do a GET on a page that responds by reloading your file; I don't think it is strictly necessary to use a web service. This way you can also trigger it from a web browser.
A Web Service is the "right" way if you want them to communicate directly. However, I've found it easier in some situations to coordinate via database records. For example, my web app has bulk email capability. To make it work, the web app just leaves a database record behind specifying the email to be sent. The WinApp scans periodically for these records and, when it finds one with an "unprocessed" status, it takes the appropriate action. This works like a charm for me in a very high volume environment.
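The coordination logic behind that pattern is small: one polling pass picks up unprocessed records, acts on them, and marks them done. Here is a hedged, language-agnostic sketch in Java (the answer's context is .NET, and all names are hypothetical); an in-memory list stands in for the database table.

```java
import java.util.ArrayList;
import java.util.List;

public class DbQueueDemo {
    // Stand-in for a database row; in the real setup this is a table the
    // web app inserts into and the Windows app polls periodically.
    static class EmailJob {
        final String recipient;
        String status = "unprocessed";
        EmailJob(String recipient) { this.recipient = recipient; }
    }

    // One polling pass: handle each unprocessed record and mark it processed.
    static int processPending(List<EmailJob> table) {
        int handled = 0;
        for (EmailJob job : table) {
            if ("unprocessed".equals(job.status)) {
                // ... send the email here ...
                job.status = "processed";
                handled++;
            }
        }
        return handled;
    }

    public static void main(String[] args) {
        List<EmailJob> table = new ArrayList<>();
        table.add(new EmailJob("a@example.com"));
        table.add(new EmailJob("b@example.com"));
        System.out.println(processPending(table)); // prints 2
        System.out.println(processPending(table)); // prints 0 - nothing left
    }
}
```

Marking records with a status (rather than deleting them) also leaves an audit trail, which is handy in a high-volume environment.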
You cannot quite do this in the other direction, simply because web apps don't generally sit around in a timing loop (there are ways around this, but they aren't worth the effort). Thus, you'll need some type of initiating action to let the web app know when to reload the file. To do this, you could use the following code to do a GET on a page:
WebRequest wrContent = WebRequest.Create("http://www.yourUrl.com/yourpage.aspx");
using (WebResponse wrResponse = wrContent.GetResponse())
using (Stream objStream = wrResponse.GetResponseStream())
{
    // I don't think you'll need the StreamReader, but I include it for completeness
    StreamReader objStreamReader = new StreamReader(objStream);
    objStreamReader.ReadToEnd();
}
You'll then reload the file in the Page_Load method whenever this page is requested.
How is the web application loading the file? If you were using a dependency on the Cache object, then simply updating the file will invalidate the Cache entry, causing your code to reload that entry when it is found to be null (or based on the "invalidated" event).
Otherwise, I don't know how you would notify the application to update the file.
An ASP.NET application only exists as an instance to serve a request. This is why web services are an easy way to handle this - the application has been instantiated to serve the service request. If you could be sure the instance existed and got a handle to it, you could use remoting. But without having a concrete handle to an instance of the application, you can't invoke the method directly.
There's plenty of other ways to communicate. You could use a database or some other kind of list which both applications poll and update periodically. There are plenty of asynchronous MQ solutions out there.
So you'll create a page in your web app specifically for this purpose. Use a GET request and pass in a URL parameter. Then, in the Page_Load event, check for this parameter; if it exists, do your processing. By passing in the parameter you'll prevent accidental page loads from causing the file to be reloaded and processed when you don't want it to be.
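The gate itself is trivial; a hedged sketch in Java (the answer's context is ASP.NET, and the parameter name is hypothetical) shows the check that would live in Page_Load:

```java
import java.util.Map;

public class ReloadGateDemo {
    // Mirrors the Page_Load check described above: only act when the
    // expected query-string parameter is present.
    static boolean shouldReload(Map<String, String> queryParams) {
        return queryParams.containsKey("reloadFilter");
    }

    public static void main(String[] args) {
        System.out.println(shouldReload(Map.of("reloadFilter", "1"))); // prints "true"
        System.out.println(shouldReload(Map.of()));                    // prints "false"
    }
}
```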
From the Windows app, make the GET request using the .NET HttpWebRequest class. Example here: http://www.codeproject.com/KB/webservices/HttpWebRequest_Response.aspx