I am currently creating a test script using the Codeception WebDriver module, and I have decided to run this script automatically at a specified interval to check whether or not my application is still working. With its ability to generate error logs and screenshots, this automated testing tool has been helpful, but I personally think that capturing response headers would be even more helpful. Is there any way I could do this without changing the frameworks I have been using? If so, could you guide me on how to implement it?
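One possible approach, sketched below under some assumptions: Selenium WebDriver itself does not expose response headers, so a common workaround is a custom Codeception helper that fetches the headers for a URL with its own curl request. The class name, method name, and file location here are hypothetical; adapt them to your suite.

    <?php
    // tests/_support/Helper/ResponseHeaders.php (hypothetical location)
    namespace Helper;

    // Custom Codeception helper: issues a separate HEAD request with curl
    // and returns the raw response headers for the given URL.
    class ResponseHeaders extends \Codeception\Module
    {
        public function grabResponseHeadersFor($url)
        {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request, skip the body
            curl_setopt($ch, CURLOPT_HEADER, true);         // include headers in the output
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the result instead of printing it
            $headers = curl_exec($ch);
            curl_close($ch);
            $this->debug($headers); // shows up in Codeception's --debug output
            return $headers;
        }
    }

After enabling the helper in your suite configuration, a test could call $I->grabResponseHeadersFor($url) and log or assert on the result. Note the headers come from a second request, not from the one the browser made, so they may differ for session-dependent pages.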
I have a ReportPortal installation running on a Windows box. I am planning to use it as a dashboard for unit test results and other automated test results. I understand that ReportPortal integration with unit test frameworks is done at the logger level, so that the test app itself can send results back to the dashboard.
I have a scenario where the test application is an exe that I want to launch by sending a command from the dashboard to the system under test.
Are there any provisions for this?
Or do I have to build an agent that talks to ReportPortal through its API?
Thanks
No, there is nothing like that at the moment.
It is a pretty popular request, so we have it in the backlog, but we are still focused on test report aggregation first; the other types of functionality will come later.
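If you do end up building such an agent, the reporting side can talk to ReportPortal's REST API directly. A minimal sketch in PHP of starting a launch, assuming the v1 endpoint layout; the host, project name, launch name, and token are all placeholders:

    <?php
    // Start a launch via the ReportPortal REST API (placeholders throughout).
    $host    = 'http://localhost:8080';
    $project = 'my_project';
    $token   = 'YOUR_API_TOKEN';

    $payload = json_encode([
        'name'      => 'windows-exe-run',
        'startTime' => date('c'),   // ISO 8601 start time
        'mode'      => 'DEFAULT',
    ]);

    $ch = curl_init("$host/api/v1/$project/launch");
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Content-Type: application/json',
        "Authorization: bearer $token",
    ]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);     // on success, contains the new launch id
    curl_close($ch);

This sketch only covers reporting; triggering the exe on the system under test from the dashboard is exactly the part that is missing today, so the agent would still need its own command channel.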
We have a POC running tests on the Firebase device cloud, and in the last few days I have been seeing weirdly formatted test_result_1.xml files generated after a test concludes.
The file used to be formatted as JUnit XML, but now the content varies wildly from run to run. Sometimes it is empty, and sometimes it includes the content of an error that happened in one of our tests.
Has anybody encountered such behaviour? I can't seem to find a way to contact their support for assistance with this.
Thanks!
Try posting this to the #test-lab channel in the Firebase Slack community: https://firebase.community/
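Until you hear back, one defensive option is to validate each test_result_1.xml before your pipeline ingests it, so an empty or garbled file is skipped rather than breaking the run. A minimal sketch in PHP; the file path is just an example:

    <?php
    // Skip result files that are not well-formed XML instead of failing.
    libxml_use_internal_errors(true);      // collect parse errors quietly
    $file = 'test_result_1.xml';           // example path
    $xml  = simplexml_load_file($file);
    if ($xml === false) {
        echo "Skipping $file: not well-formed XML\n";
        foreach (libxml_get_errors() as $error) {
            echo trim($error->message), "\n";
        }
        libxml_clear_errors();
    }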
I've run into an interesting and frustrating wrinkle while attempting to run some functional tests on a Symfony2 project I inherited. I'm sending a POST request to one of my controller methods via Symfony's test client. The debugger works during the test on the test file itself: I can set a breakpoint, run the test, and the debugger will stop all processing at that point until I step through.
The problem is that the debugger does not work in the code that is reached by the client request, likely because it is handled as a secondary request/session.
Is there a way around this? I'm using PhpStorm 7.1.3, if that matters.
Solved it with the help of this post and by supplying the Symfony test client with the right URI.
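For anyone who lands here, the basic shape of such a test looks roughly like this; the route and form fields are made up, and the key detail for debugging is that an insulated client runs the request in a separate PHP process, which a debugger session will not follow:

    <?php
    // Sketch of a Symfony2 functional test posting to a controller.
    use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

    class MyControllerTest extends WebTestCase
    {
        public function testPost()
        {
            $client = static::createClient();
            $client->insulate(false); // keep the request in-process so breakpoints fire
            $crawler = $client->request('POST', '/app/endpoint', [
                'field' => 'value',
            ]);
            $this->assertEquals(200, $client->getResponse()->getStatusCode());
        }
    }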
I am a complete newbie when it comes to this topic.
I want to know how JMeter can be driven from the console (bash or cmd).
My goal, for a start, is to understand how to run my testplan.jmx for several URLs. To do this, I added "server" and "port" parameters to my test plan.
How can I change these parameters from the console and then run JMeter?
Moreover, I want to ask you guys to suggest any free online tutorials where I can learn more about JMeter in non-GUI mode, as well as the possibilities for integrating JMeter with different frameworks for automated testing.
Thank you very much indeed.
See:
http://jmeter.512774.n5.nabble.com/How-to-Run-Jmeter-in-command-line-td2640725.html
You can launch your test plan from the command line, specifying parameters, like:
jmeter -n -t plan.jmx -Jmy_url=http://www.firsturl.com
Inside your test plan you'd reference that command-line parameter as ${__P(my_url)}.
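For the "server" and "port" parameters from the question, the same mechanism applies; the property names are whatever you choose to pass with -J:

    jmeter -n -t testplan.jmx -Jserver=www.firsturl.com -Jport=8080

Inside the plan, reference them as ${__P(server)} and ${__P(port,80)}; the optional second argument to __P is a default used when the property is not set.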
In terms of capturing results when running in non-gui mode, you may want to see:
http://blogs.amd.com/developer/2009/03/31/using-apache-jmeter-in-non-gui-mode/
Personally, my experience is with using the GUI to write and run test plans, but this approach seems workable.
I wrote an automated test using the dijit robot, but in order to be able to use relative paths within our web application, I created an OSGi service for our tests and put the test code in a Velocity template. When I try to run the tests, nothing happens. If I use the same script in an HTML file and access it directly from Windows Explorer (not via localhost), it works fine. There seem to be many conditions under which dijit robot tests simply will not run. Has anyone run into this and figured out all the little gotchas needed to make dijit tests run?
Check out dijit.initRobot(); that might take care of some things for you.
One thing that was screwing up a lot of my tests is described in this blog post: basically, the robot was not initializing because I was obscuring a special div that the robot clicks to initialize itself.
However, I have realized that there are still quite a few problems with the DOH robot; it just seems very fragile. Often I will have a working test, then add one robot command and the test will break. When I remove the line and try again, the robot won't run even though it is the exact same code as before.
I've found the best thing to do when writing robot code is to just clear the cache every time and cross your fingers. Good luck.
The problem can occur if you are trying this with OpenJDK; run it on the Oracle Java version instead.