I would like to include code coverage in our nightly build process. We're using CruiseControl, Ant, and Buckminster; Buckminster drives the checkout from multiple repositories and the PDE building and packaging of the product.
Does anyone have experience integrating code coverage into an RCP headless build?
I have been looking at Cobertura, EMMA/EclEmma, and DbUnit, and am very interested to hear of any experiences with these or any other tools.
Cobertura seemed to be able to do the job for us.
Once the unit tests were running (a question all to itself), I was able to:
instrument the bundles as standalone jars;
re-run the unit tests with the Cobertura jar on the parent class loader's classpath.
The trick here is to use osgi.parentClassloader=app in the config.ini file used to run the unit tests. The possible values are:
ext == the Java extension class loader
boot == the boot class loader (the default)
fwk == the OSGi framework's class loader
app == the application class loader, i.e. just like a normal application, with a classpath specified on the command line
The instrumented code needed runtime access to the cobertura jar, so this last step was imperative.
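A sketch of the relevant configuration, with jar names and paths as illustrative assumptions:

```ini
# config.ini used for the headless unit-test launch:
# delegate to the application class loader so that the instrumented
# bundles can see cobertura.jar on the JVM classpath
osgi.parentClassloader=app
```

The JVM is then started with Cobertura on the classpath, e.g. something like `java -cp cobertura.jar:org.eclipse.equinox.launcher_*.jar org.eclipse.equinox.launcher.Main -application ...` (the exact launcher invocation depends on your Equinox version).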
EclEmma now has an additional component called "EclEmma Equinox Runtime" that provides headless code-coverage analysis for any OSGi/Equinox application:
http://www.eclemma.org/devdoc/headless.html
I used to work with Appium for mobile automation; now I want to apply the same approach to Windows desktop applications.
Is there any automation tool similar to Appium for testing Windows desktop applications?
(Other than RIDE and AutoIt.) I'm using RIDE with the Sikuli library, but I find it is not as good as Appium, which offers many locator strategies (by name, by id, by XPath, ...). There is no good strategy for locating elements only by image with SikuliLibrary, or by mouse-click position with AutoItLibrary, so if I change from Windows 7 to 10 the images no longer match our scripts.
Sikuli and Appium are two different types of tools with different test approaches. Comparing them in depth is well beyond the scope of SO, and I urge you to look elsewhere for that type of information.
Within the Robot Framework community a number of official/common libraries exist. These are well known and easily found. However, there is also a large group of libraries that are not found in the Python Package Index but are freely available on, for example, GitHub. On top of this there are plain Python modules that can be imported directly and whose methods then become usable as keywords. If your favorite application has a Python interface or module, then creating a Robot Framework library is not difficult.
Given the specific topic of Windows desktop application testing with Robot Framework, my first search result led me to the official Python Testing Tools Taxonomy page and its GUI testing section. From this list the PyWinAuto project shows the most promise, as it supports Windows and is open source. A Robot Framework library, robotframework-winbot, exists and still works, but has not been updated in a while.
As you mentioned Appium, I've also taken a look there, and although the Robot Framework library's keyword documentation doesn't seem to support Windows applications, Appium itself has recently released some support for Windows application UI testing. This is based on the fairly new Microsoft Windows Application Driver. Python support is available, as there are Python examples in the Python Samples section, but there is no specific Robot Framework library.
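As a hedged sketch, assuming the Appium Python client and a locally running WinAppDriver (the endpoint, app id, and capability names below follow the WinAppDriver samples and may need adjusting for your install; the locator method follows the older Appium client API):

```python
# Capabilities for launching the Windows Calculator via WinAppDriver.
# The app id below is the example used in Microsoft's samples.
CALC_CAPS = {
    "app": "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App",
    "platformName": "Windows",
    "deviceName": "WindowsPC",
}

if __name__ == "__main__":
    # Requires the Appium-Python-Client package and WinAppDriver
    # listening on its default local endpoint.
    from appium import webdriver

    driver = webdriver.Remote("http://127.0.0.1:4723", CALC_CAPS)
    driver.find_element_by_name("Five").click()  # locate by name, Appium-style
    driver.quit()
```

This gives you the same by-name/by-id locator strategies you are used to from Appium, rather than image matching.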
There may be other options, but I recommend you try these first and raise specific questions when you encounter issues.
We have a Linux production server and a number of scripts we are writing to run on it to collect data, which will then be put into a Spark data lake.
My background is SQL Server / Fortran and there are very specific best practices that should be followed.
Production environments should be stable in terms of version control, both for the code and for the installed applications, operating system, etc.
Changes to code/applications/operating system should be done either in a separate environment or in a way that is controlled and can be backed out.
If a second environment exists, system changes can be tested by running the two in parallel.
Developers are (largely) restricted from changing the production environment.
In reviewing the R code, there are a number of things that I have questions on.
library(), install.packages(): how do I rule out installing newer versions of packages each time the scripts are run?
what is the best way to invoke R scripts scheduled through a cron job? There are a number of choices here.
When using RSelenium, what is the most efficient way to use a GUI web browser or a virtualised web browser?
In any case, I would scratch any notion of updating the packages automatically. Expect the maintainers of the packages you rely on to introduce backward-incompatible changes: your code will stop working out of the blue if you auto-update. Do not assume anything is sacred.
Past that you need to ask yourself how much hands on is your deployment. If you're OK with manually setting up each deployment then you can probably get away using the packrat package to pull down and keep sources of the exact versions you are using. This way reproducing your deployment is painful, but at least possible. If you want fully automated reproducible deployments I suggest you start building docker images with your packages and tagging them with dates or versions.
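As an illustration of the Docker route (image tag, package name, and snapshot date below are assumptions, not a recommendation of specific versions):

```dockerfile
# Pin the base image so the R version never drifts
FROM rocker/r-ver:3.5.1

# Install packages from a dated snapshot repository so the same
# versions are resolved on every rebuild (snapshot date is illustrative)
RUN R -e "install.packages('dplyr', repos = 'https://mran.microsoft.com/snapshot/2018-09-01')"

COPY scripts/ /opt/scripts/
CMD ["Rscript", "/opt/scripts/collect_data.R"]
```

Tag the resulting image with a date or release version and your cron job can always run a known, reproducible environment.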
If you make no provisions for reproducing your environment, you are asking for trouble. It may seem OK at first to simply fix incompatibilities as they come up with each update, and that does indeed seem to be the official workflow from the powers that be, however misguided that is. But as your codebase grows, that will eventually be all you end up doing.
I'm currently trying to implement a deployment process (I think that's what you call it?).
A former company I worked for had three environments and used some form of DevOps:
dev.url => development, for devs
stage.url => staging, for QA
(live.)url => production
Finished features would be pulled to stage for QA. When QA gave the go-ahead, the release got a tag, and that tag was then pulled to the live environment. This was all combined with an agile process.
So my questions would be:
Do you know the name of that deployment process?
Do you know other popular deployment processes similar to the one I just described? What kind of process do you use?
I'm looking for something like:
Development process, deployment, GitHub
Thanks
Looks like what you are looking for is called a deployment pipeline, which, as mentioned by @prasanna, is the key part of continuous delivery. Key to continuous delivery is continuous integration (which in turn requires automated tests) and automated deployment with configuration management tools.
Regarding the tool, you can use Jenkins along with its Build Pipeline Plugin.
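As an illustration only (stage names and shell commands are assumptions, and this uses Jenkins' newer declarative pipeline syntax rather than the Build Pipeline Plugin), the flow you described might be sketched as:

```groovy
pipeline {
    agent any
    stages {
        stage('Build')   { steps { sh 'make build' } }
        stage('Test')    { steps { sh 'make test' } }
        stage('Staging') { steps { sh './deploy.sh staging' } }
        stage('Production') {
            steps {
                // Manual gate, like the QA "go" before tagging for live
                input message: 'QA approved this tag?'
                sh './deploy.sh production'
            }
        }
    }
}
```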
Of course this is continuous delivery. But the devil is in the details:
What do you do when things move from Dev -> QA -> Staging -> Prod?
What tests are run when the build is at each of these stages?
How does promotion between environments happen (automated/manual), etc.?
The key in CD is to automate all of these as deeply as possible, so you can take faster decisions when builds get stuck in any of these environments.
As rightly mentioned in the two answers above, you are referring to continuous delivery. There can be multiple levels of maturity in continuous delivery. You start by putting a continuous integration process in place, which essentially means the code is compiled frequently to check for possible failures.
Then you put some checks on the compiled code, triggered automatically.
Then you go ahead and deploy this code.
The next step would be to provision the environment where the code is deployed on the fly as well.
What coding tools do you use to improve the effectiveness of programming in Flex?
I found the Tr.ace() AS3 debugging utility, which helps me a lot when working in a team of programmers.
frameworks like RobotLegs and others.
Signals.
What are the others?
Chris
I guess it depends how you define tools. These are some things used by me, or others I've spoken to:
ServiceCapture: This is a great program to see the packets being sent to and from the browser / Flash Player and a remote server. Charles is another common tool for this purpose; and Flash Builder now has a network monitor built in to perform the same task. ServiceCapture will also show the trace statements that your app puts out.
Step Through Debugger: The Step Through Debugger is a fantastic tool for stepping through code to figure out what happens. It's built nicely into Flash Builder; but there is a command line tool too. I assume that other Flex IDEs support this functionality.
Flash Builder: You can write code in a text editor and compile it via command-line tools, but an IDE helps tremendously. Flash Builder is Adobe's IDE and the one I use primarily, but others exist, such as IntelliJ or FDT.
Ant: Ant is a build tool that allows you to do a bunch of tasks automatically, such as compiling and automatically uploading to a server. Maven and CruiseControl are two alternative options I've heard about; I think both are much more advanced than Ant.
Subversion: Subversion is a version control system that allows you to track changes to your code. It is strongly recommended for any project, but has extra-special benefits when it comes to projects with multiple people working on them. Other options are Git and CVS.
I would like to add something to the "www.Flextras.com" answer.
Profiling the application
Profiling an application can help you in understanding the following:
Call Frequency
Method Duration
Call Stacks
Number of instances and their sizes at any given point in time
Garbage collection and Loitering Objects
For more info, refer to the link:
http://livedocs.adobe.com/flex/3/html/help.html?content=profiler_1.html
Note that the profiler and network monitor are only available in the premium edition of Flash Builder.
Happy coding!
There is also
Flexformatter: This is a great plugin for Flash Builder that helps you clean up ActionScript/MXML code.
http://sourceforge.net/apps/mediawiki/flexformatter/index.php?title=Main_Page
I heard Google has an automated process like this:
When you check in, your code is checked into a temporary location.
It is built.
Style checks run.
Tests run.
If there are no problems, code goes to actual repository.
You receive an e-mail containing test results, performance graphs, style-check results, and whether your code was checked in.
So if you want to learn whether you broke something, or whether the great performance gain you expected occurred, you just check in and receive an e-mail telling you what you need to know.
What are your favorite build server best practices?
What you described for Google is what every basic build process does. Specific projects may have additional needs; for example, this is how we deploy web applications from staging to production:
Build start
The live site is taken offline (Apache redirects to a different directory holding an "Under construction" page)
An SVN update is run on the production server
Database schema deltas are run
Tests are run against the updated source and schema
In case of failure: a rollback is run (SVN revert and database schema undo)
The site comes back online
Build ends
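As a hedged sketch of the rollback logic in the steps above (step names are placeholders for the real svn/database/Apache commands, and the runner is injected so the flow can be exercised without a real server):

```python
def deploy(run):
    """Run the staged deploy; undo applied steps if anything fails.

    `run` executes one named step and raises RuntimeError on failure.
    Step names are placeholders for the real commands.
    """
    undo_stack = []                      # undo actions for completed steps
    run("take-site-offline")             # Apache serves "Under construction"
    try:
        run("svn-update")
        undo_stack.append("svn-revert")
        run("apply-schema-deltas")
        undo_stack.append("undo-schema-deltas")
        run("run-tests")                 # failure here triggers the rollback
    except RuntimeError:
        for step in reversed(undo_stack):
            run(step)                    # roll back in reverse order
        raise
    finally:
        run("put-site-online")           # site comes back up either way
```

Keeping an explicit undo stack means only the steps that actually completed are reverted, and in the reverse of the order they were applied.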
On the Java platform I have tried every single major CI system there is. My tip: paying for a commercially supported solution has turned out to be the cheapest build system I've ever seen. These things take time to maintain, support, and troubleshoot, especially with a heavy load of builds running all the time.
The example workflow you give is similar to the one proposed by TeamCity. The idea being:
Code
Check in to "pre-test"
CI server tests the "pre-commit"
If (and only if) tests pass, the CI server commits the code change to the main repo
It's a religious war but I prefer:
Code - Test - Refactor (loop)
Commit
CI server also validates your commit
Every responsible programmer should run all the tests before committing.
The main argument for the first way is that it guarantees that there is no broken code in SCM. However, I would argue that:
You should trust your developers to test before committing
If tests take too long, the problem is your slow tests, not the workflow
Developers are keen to keep tests fast
Relying on the CI server to run tests gives you a false sense of security