I am exploring the use of Robot Framework for writing unit tests in Python. We currently have some pre-existing unit tests that were developed using pytest. The unit-tests mock certain functionalities using the mock.patch method (such as connecting to and reading from DBs). Is there an equivalent mock functionality in Robot? Or should one have to write libraries to do it? I am very much a newbie when it comes to Robot and unit tests, so please go gentle on me :)
From a Stack Overflow perspective this question is quite broad. Luckily the answer to this question can be determined from the Robot Framework site:
Robot Framework is a generic test automation framework for acceptance
testing and acceptance test-driven development (ATDD).
Although you can integrate it with a unit test framework through custom Python development, the real question is whether you should. In line with the above definition, I'd recommend not mixing the unit test layer with your other test layers (integration, acceptance, etc.) and keeping them separate.
Define a test approach with layers so that each layer builds upon the confidence obtained from the previous ones. This will reduce the scope of testing in each following layer and thus the overall complexity of your test setup.
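That said, to answer the literal question: Robot Framework has no built-in equivalent of `mock.patch`, but since Robot keyword libraries are plain Python classes, you could wrap `unittest.mock` yourself if you really wanted to. A minimal sketch of such a bridge follows; the class and keyword names are invented for illustration, and this is exactly the kind of custom development the answer above recommends thinking twice about.

```python
# Sketch of a hypothetical Robot Framework keyword library wrapping
# unittest.mock.patch. All names here (MockingLibrary, Patch Function,
# Stop All Patches) are made up for illustration; Robot itself ships
# no mocking keywords.
from unittest import mock


class MockingLibrary:
    """Keywords a Robot test could call to patch Python objects."""

    def __init__(self):
        self._patchers = []

    def patch_function(self, target, return_value=None):
        """Start patching `target` (e.g. 'myapp.db.connect')."""
        patcher = mock.patch(target, return_value=return_value)
        patcher.start()
        self._patchers.append(patcher)

    def stop_all_patches(self):
        """Undo every active patch (call this in test teardown)."""
        for patcher in self._patchers:
            patcher.stop()
        self._patchers.clear()
```

In a Robot suite this would be imported with `Library    MockingLibrary` and the keywords called as `Patch Function` / `Stop All Patches`, but again: if you find yourself writing this, it is usually a sign the test belongs in pytest at the unit layer instead.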
So, I'm using JUnit for Java and I've become convinced of the benefits of TDD. Now I'm working (with a team) on a web application for my boss (HTML/CSS/PHP) and proposed to write some tests in PHPUnit for it. But I can't find a way to test how the code processes data (even though the web application does work so far). I started googling and only found tutorials that test standalone functions in the code.
My question is: is it even possible to use PHPUnit to test the PHP code embedded between the HTML lines?
You're talking about functional testing rather than unit testing. Codeception (built on top of PHPUnit) might be a good place to start for you, since it has more natural functional testing out of the box.
Some examples:
https://codeception.com/docs/02-GettingStarted
https://codeception.com/docs/03-AcceptanceTests
What is the difference between these concepts:
development of code
implementation of code
construct a code
and are these concepts correct to use?
These are all phases involved in building software. Roughly, the difference between the ones you asked about is this: it starts with design, which is the plan you work out up front; then you develop it, i.e. write the code, which is the core activity that makes your software functional; and finally implementation, i.e. deployment of your code so it runs on a platform.
I use Grunt to run my unit tests without assertion modules; I just log what I need with Grunt and use my "custom" conditions to check the variables' states and values.
I was thinking about using mocha with Grunt, but I am trying to figure out what it would really change when I run my tests.
How would my tests be more valuable using mocha (for example)?
Given the success of these tools, I feel like I am missing something about their utility.
If someone could explain to me how and when they are useful, it would be really great!
For starters, mocha is not an assertion library and doesn't ship any by default.
Mocha is a testing framework which lets you describe and organize your tests, using one of several available interfaces. It also provides a report for the status of the tests after they run (and sometimes while they run).
You seem to be using a way of describing and running your tests already, so unless you give more details on the capabilities of your framework/runner/reporter I can't point you to mocha's (or other frameworks/runners') advantages and/or disadvantages against it. I'd like to see some of your tests and your (test-related) grunt tasks code to better assess that.
On the other hand, assertions are just checks for conditions that must be met in your tests in order to consider them "passing". An example of an assertion library for JavaScript is chai.js. You mention using "custom" conditions (by which I don't really know what you mean), so you seem to be using some kind of asserts. The key is that if an assert happens to be false, the test from which it's evaluated must fail. If you're accomplishing that in your tests then congratulations, you're using asserts already. If you're not, then your tests are not automatic (like, for instance, if you compare your logs to their expected values manually, then you're running manual tests).
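The logging-versus-asserting distinction is language-agnostic, so here is the same idea sketched in Python rather than JavaScript (the `add` function is made up, standing in for whatever you are testing):

```python
# A tiny illustration of logging vs. asserting. add() stands in for
# any function under test.
def add(a, b):
    return a + b

# Manual check: you print the value and eyeball it yourself.
# Nothing fails automatically if the value is wrong.
print("add(2, 3) =", add(2, 3))

# Automatic check: a false condition raises AssertionError, which is
# what lets a test runner mark the test as failed without a human
# reading the logs.
assert add(2, 3) == 5
```

If your "custom conditions" end up failing the Grunt task when they are false, you already have the second kind; if you only read the logged values, you have the first.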
My comments above are all theoretically correct and you could probably live without a separate test framework, runner and assertion library. However, using a mature tool, which is also open source so it's maintained by a community of developers, will probably be more reliable than coding your own test description framework, test runner, test reporter and assertions library.
It will also point you in the right direction regarding time-tested conventions and best practices, since these tools' features tend to help you adhere to them (or plainly force them, for your own good). For instance, mocha provides a simple yet tidy way of describing set-up and tear-down procedures for tests (and test suites); if you had to implement that yourself, your tests would carry a lot of not-completely-relevant code, making them harder to understand and thus to maintain.
We are working in a small team. We often have problems where developer1 makes changes to a stored procedure or function and it breaks developer2's work. Such issues are only discovered by chance later. Please advise how such issues can be prevented. Is there a free tool we can run to detect them?
Slowly introduce unit tests, focused integration tests and full system tests.
For all of those, use a .NET unit test framework. What you do inside the test is what makes it fall into one of the above categories. Make sure to keep each of those three types of tests separate, as there will be a big difference in how fast each of them executes.
For the unit test framework I suggest NUnit but there are others, one that I've found interesting but never made the jump is xUnit.net.
For full system tests I suggest to run them in the unit test framework using WatiN. You could also go with Selenium RC.
We often have problems where developer1 makes changes to a stored procedure or function and it breaks developer2's work. Such issues are only discovered by chance later.
For that specific type of scenario I strongly suggest focused integration tests. Full system tests might catch such a scenario, but they would still leave you to figure out why it broke.
Instead, focus the test on the very specific DB access code that makes the call to the procedure. By adding scenarios there that capture all the expectations developer2 had of that procedure when (s)he wrote the related .NET code, regression issues with that integration code can be revealed very quickly and dealt with very effectively. Also note that developer1 can easily run the focused integration tests that involve that procedure or area of the database many times, which is a lot more likely to happen than doing the same with full system tests.
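To make the shape of such a test concrete, here is a hedged sketch in Python using an in-memory SQLite table as a stand-in for the real database (the schema, data, and function names are invented; in this answer's context the equivalent would be a .NET test exercising the actual stored procedure):

```python
import sqlite3

# Stand-in for the data-access code that calls the stored procedure.
# In the real scenario this would be the .NET method wrapping the proc.
def get_active_users(conn):
    return conn.execute(
        "SELECT id, name FROM users WHERE active = 1 ORDER BY id"
    ).fetchall()

# Focused integration test: encode developer2's expectations of the
# procedure (columns returned, filtering, ordering) so that a change
# by developer1 breaks loudly here instead of surfacing by chance later.
def test_get_active_users():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, active INTEGER)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?, ?)",
        [(2, "bob", 1), (1, "ann", 1), (3, "cid", 0)],
    )
    rows = get_active_users(conn)
    assert rows == [(1, "ann"), (2, "bob")]

test_get_active_users()
```

The point is not the tooling but the scope: one test per expectation, hitting only the thin slice of code that talks to the procedure.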
You can do either automated unit testing using tools such as NUnit, or automated black-box testing using tools such as Selenium. Note that both options (even with free tools) may need significant investment in terms of time and effort. Typically, unit test cases are created by the developers themselves, while for automated black-box testing a separate QA team is utilized; this is mostly because unit test cases are generally written in languages such as C# or VB.NET, while automated black-box testing tools typically use scripting languages.
I have a Flex application written using the PureMVC framework. Now, I want to write tests. We are using Flash Builder 4. Is FlexUnit sufficient for testing? Are there any issues you have faced while writing tests?
I, personally, have had problems getting the unit testing features of Flash Builder 4 to work. I ended up creating a separate AIR project and writing unit tests in that using FlexUnit 1 (I believe the SWC version is 0.98 or something similar). I chose that version of FlexUnit because, at the time, documentation was very sparse on FlexUnit 4 and its Flash Builder integration. I suspect things have changed on the documentation and stability front, but I have not gone back to it since Flash Builder was released.
Now, I'm not sure specifically how PureMVC works, so I can't comment on the specifics of unit testing with PureMVC. But I have found that unit testing works great for model classes, which are often not framework-specific. You should have no problem unit testing those classes.
I have found that unit testing is not beneficial for user interface classes. For those, an automation testing tool such as RIATest or FlexMonkey is better suited.
Does that answer the question?