I have a lot of tests that exercise a REST API using the requests library.
The whole suite takes a long time to finish.
I want to make the tests asynchronous, but I don't have any experience with that.
Can someone give me some advice, or point me to articles where I can read up on this? Or maybe you know another approach.
P.S.: The pytest-xdist plugin does not work for me; for a reason I haven't been able to figure out, it never starts running my tests in parallel, and I'm not the only one who has hit this issue.
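In case it helps to see the shape of it, below is a minimal sketch of what an asynchronous test could look like using aiohttp and plain asyncio, with no extra pytest plugin. The base URL and the paths are made up for illustration; the point is that the blocking requests calls are replaced with coroutines that run concurrently.

    import asyncio
    import aiohttp

    BASE_URL = "http://localhost:8000/api"   # hypothetical API root, adjust to your service

    async def fetch_status(session, path):
        # Issue one GET request and return the HTTP status code.
        async with session.get(f"{BASE_URL}/{path}") as response:
            return response.status

    def test_endpoints_respond():
        # Fire several requests concurrently instead of one after another.
        async def run():
            async with aiohttp.ClientSession() as session:
                return await asyncio.gather(
                    *(fetch_status(session, path) for path in ("users", "orders", "items"))
                )

        assert all(status == 200 for status in asyncio.run(run()))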
I am working with Maximo Anywhere version 7.5.2.0. I see a lot of log trace on the console after a successful login, and downloading data always takes some time.
I want to tune the performance.
I also want to stop the log trace in production. Can I optimize the code for performance tuning? If yes, kindly suggest where I can do it.
Thanks in advance.
If you change logLevel to "0" in app.xml, it will disable logging. In future releases, we've done a better job of disabling the logging even further at this lower log level. The latest iFix for 7521 should have this fix.
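If it helps, the change is just an attribute in app.xml; the parent element below is only a placeholder, since the exact layout depends on your Maximo Anywhere version, but logLevel="0" is the part that matters:

    <!-- Illustrative fragment only: the surrounding element is a placeholder;
         logLevel="0" is what turns the trace output off. -->
    <logging logLevel="0" />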
I am developing a small client application for monitoring XenServer using the XenAPI provided by Citrix. I am able to get all the values (CPU, network read, network write, disk read, disk write, ...), but I am facing the issue below.
Can anybody please help me get the memory usage (total, free, used) for the VMs present on the XenServer using XenAPI? I tried to get this with the VM_guest_metrics API call on VM, but it gives me empty results. Please help me in this regard.
I took the SDK (XenAPI) from the link below:
http://community.citrix.com/display/xs/Download+SDKs
Thanks in advance for your help.
The recommended way to get the data is to use the XAPI Round Robin Database (RRD) that comes with XAPI.
http://wiki.xen.org/wiki/XAPI_RRDs
See also the tutorials from Xen Day:
http://wiki.xen.org/wiki/Creating_a_LVM_backed_XFS_SR
In particular, the "Nuts and Bolts" session by Steven Maresca.
See also the code in OpenXenManager:
http://sourceforge.net/projects/openxenmanager/
It is an open-source clone of Citrix XenCenter and has performance graphs built on XAPI.
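To make that concrete, here is a rough sketch using the Python bindings from the SDK linked above. The host name and credentials are placeholders; VM_metrics.get_memory_actual gives the hypervisor-side memory per VM, while free/used memory as seen inside the guest comes from the RRD feed (the memory_internal_free data source, which needs the guest tools installed).

    import time
    import urllib.request
    import XenAPI                      # Python bindings shipped with the SDK

    HOST = "https://xenserver.example.com"   # hypothetical pool master
    session = XenAPI.Session(HOST)
    session.xenapi.login_with_password("root", "password")
    try:
        for vm_ref, record in session.xenapi.VM.get_all_records().items():
            if record["is_a_template"] or record["is_control_domain"]:
                continue
            metrics_ref = session.xenapi.VM.get_metrics(vm_ref)
            memory_actual = int(session.xenapi.VM_metrics.get_memory_actual(metrics_ref))
            print(record["name_label"], "memory_actual:", memory_actual, "bytes")

        # Guest-internal free memory lives in the RRDs; pull the last 5 minutes.
        # (session._session is the opaque session reference; a self-signed
        # certificate may require an unverified SSL context here.)
        start = int(time.time()) - 300
        url = f"{HOST}/rrd_updates?session_id={session._session}&start={start}"
        print(urllib.request.urlopen(url).read()[:500])
    finally:
        session.xenapi.session.logout()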
We have a C#/ASP .Net web application that is built and deployed by the build server (Jenkins). One of the build steps before the automated deployment is ensuring that all automated tests pass -- including functional tests we have using Selenium 2 WebDriver and NUnit.
The problem: sometimes these tests fail randomly. They will succeed for 100 builds and then one just fails. They fail for various reasons -- a .Click() event is simply ignored, an element can't be found, IE has a bad day, etc. We have an AJAX-heavy web app, so we depend heavily on WebDriverWaits, but we always take this into account while writing tests, and like I said, the tests do pass most of the time.
What are some ways to avoid or fix this problem? A couple that came to my mind:
Accept a certain number of failures (seems like a bad idea)
Rerun test failures?
I don't like either of the suggestions that you mention, but I admit to having used them occasionally. The best thing to do is to make sure that when there is a seemingly "random" failure, you do everything you can to get all of the data about why it really failed. Was it an environment issue? Did some other process on the machine interfere with the tests? Was it a timing issue that only appears when the site loads excruciatingly slowly, or blazingly fast?
One thing that you might try is soak testing your automated tests. Run each one 100+ times on the same build and same environment (so you can rule those out as potential failure points) and find the ones that fail occasionally. See if they fail in the same place or in different places. Generally, when you go through this exercise you'll find some tests that really are a little bit flaky, and you can remove them from the daily run until they are fixed. You could even include a soak as a check-in criterion for any automated test case.
Another thing I have found useful in getting to the bottom of some of the seemingly random failures was taking screenshots on failure. Often you can see that another window or dialog had popped up, preventing the browser from being in the foreground, and so on.
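Your stack is C#/NUnit, but the idea is the same in any binding; as a sketch, here is the shape of it in Python with Selenium, wrapping a test body so a screenshot is written whenever it throws (the file naming is just an example).

    from contextlib import contextmanager
    from datetime import datetime

    @contextmanager
    def screenshot_on_failure(driver, test_name):
        # Save a screenshot if the wrapped block raises, then re-raise
        # so the test still fails and the report shows the real error.
        try:
            yield
        except Exception:
            filename = f"{test_name}_{datetime.now():%Y%m%d_%H%M%S}.png"
            driver.save_screenshot(filename)
            raise

    # Usage inside a test:
    # with screenshot_on_failure(driver, "login_smoke_test"):
    #     driver.find_element(By.ID, "login").click()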
Of the two, I would prefer to rerun test failures, or rather, on test failure, retry the tests.
If you accept a certain number of test failures, then you get into problems about which tests are allowed to fail. You would have to have two sets of tests, some which are allowed to fail, some which are not.
For rerunning, I'm no expert on testing with NUnit, but you could have the tests themselves manage the retry. In JUnit, you can introduce a rule so that if a test fails, it is retried a maximum of 3 times. This would probably avoid most of the problems you're having. I don't know how to do this in NUnit, but see my answer to How to Re-run failed JUnit tests immediately?; it will give you the general idea.
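In the same spirit as the JUnit rule linked above, a retry wrapper is only a few lines; this Python sketch is just to show the shape of the idea (the retry count, the delay, and the choice to retry only assertion failures are arbitrary).

    import functools
    import time

    def retry(times=3, delay=1.0):
        # Re-run a flaky test up to `times` attempts before giving up.
        def decorator(test_fn):
            @functools.wraps(test_fn)
            def wrapper(*args, **kwargs):
                last_error = None
                for _ in range(times):
                    try:
                        return test_fn(*args, **kwargs)
                    except AssertionError as exc:   # retry test failures, not hard errors
                        last_error = exc
                        time.sleep(delay)
                raise last_error
            return wrapper
        return decorator

    # @retry(times=3)
    # def test_checkout_button():
    #     ...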
We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload the changes to Dev, then User Acceptance and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If you're up for it, you could become the Continuous Integration champion of your company. You could do some research on a good process for CI with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
I.e. If you are just a pleb developer and your managers are the senior developers then find another job. If you are a Senior developer and your managers are the CIO types, i.e. actually running the business... then it is your job to change it.
Tell them that if you were using a key feature of very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out when. That would mean in the event of a subtle bug getting introduced that gets passed user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important parts of using TAGS is that you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed, you can safely assume you can "roll" back to a previous working version.
Also, developers can quickly grab a TAG (dev, prod or whatever) and deploy to their development PC...a feature I use all the time to debug production problems.
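For example (the label name and server path below are made up), applying and retrieving a label with the TFS command-line client is a one-liner each:

    rem Label the current versions under the project root as part of the build
    tf label Build_1.0.123 $/MyWebApp /recursive

    rem Later, pull back exactly what was labelled
    tf get $/MyWebApp /version:LBuild_1.0.123 /recursive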
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As for whether individuals should do builds or you should have an automated build process, that depends on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of reduced costs, increased profits, reduced time, or reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you encountered times where someone made a change that went to production but never got into source control, and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you guys are moving changes between environments rather than "builds". The key distinction is that if two changes work great in UAT because of how they interact, if only one change is promoted to production it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would have from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of the code. We also have customers who lay down tags as part of the build process. As the first poster mentioned - CI is a good thing - less work, more traceability.
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to Dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto deploy the good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in visual studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.