How to model branches/GitFlow in CA Harvest? - harvest-scm

I've spent a day reading the Harvest docs here: https://docops.ca.com/ca-harvest-scm/13-0/en/using/manage-changes-in-the-repository-and-workspace
I feel this tool is designed for mainframes only:
1. Promoting packages from a lower stage to a higher one is like promoting code from lower environments to higher ones on a mainframe.
2. Code is re-compiled in each stage, as on a mainframe.
Its branching seems to be at the item/package level, not like a branch in a modern SCM tool. How can I model a normal branch strategy like GitFlow in this tool? If I can't create branches, then how can I support parallel development?

Harvest is not for mainframe development. It is meant for distributed systems.
I will explain methods of parallel development when using a single project and when using multiple projects.
Single project:
A package is the smallest unit of change.
Multiple branches can be created from the same base version and assigned to different developers, with each developer owning a branch:
File1.java - BASE version
File1.java - 0.1.1 - package 1 - developer 1
File1.java - 0.2.1 - package 2 - developer 2
When both of them complete their changes, they can merge their changes back using a process called Concurrent Merge.
If conflicts exist, a merge engineer steps in and resolves the merge activity.
Across Projects:
---------------
Say, for example, two projects exist:
proj1
proj2
A snapshot can be taken from the source code baselined in proj1.
This snapshot can then be used as the baseline for proj2.
Work on proj1 and proj2 can continue in parallel,
and when one of the projects is completed first and you want to merge its changes into the other, you can do so using a process called Cross Project Merge.
This is a short explanation.
If you need a more detailed explanation, please reach out to the CA Support team.
Regards,
Balakrishna.

The only thing I can think of is using a sibling project to model the branch.

Related

What do Production Phase, Development Phase and deployment mean in Visual Studio 2010?

I am not looking for definitions of all these terms, but am interested in knowing how developers work when they are developing software in ASP.NET.
My senior has divided the project into 2 folders. The first one is a Development (D) folder, the second a Subversion (S) folder. Both folders contain the same files.
While programming (development), in which folder should I work? (I mean, which files should I open to start programming: those in the D folder or the S folder?)
We are using Subversion and Visual Studio 2010.
If anyone could explain the deployment scenario to me, I would be very thankful.
Thanks.
We use SVN at our site (TortoiseSVN as client).
Each project has its own repository with three areas: Trunk, Releases, and Branches. Trunk is the work in progress, Releases holds snapshots of the source for major releases, and a new Branch gets created for each particular task. Upon completion a branch gets merged back into Trunk, and eventually a release gets created from Trunk.
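For illustration, the day-to-day commands behind that layout might look roughly like the sketch below, driven here from a small Python script (the repository URL, branch and release names are made up; plain shell or TortoiseSVN does the same job):

```python
import subprocess

REPO = "https://svn.example.com/repos/myproject"  # hypothetical repository URL

def svn(*args):
    """Run an svn command and fail loudly if it returns an error."""
    subprocess.run(["svn", *args], check=True)

# A new Branch gets created for a particular task (a cheap server-side copy).
svn("copy", f"{REPO}/trunk", f"{REPO}/branches/task-123",
    "-m", "Branch for task 123")

# ...work is committed to a working copy of branches/task-123...

# Upon completion, the branch is merged back into a trunk working copy.
svn("merge", f"{REPO}/branches/task-123", "trunk-wc")
svn("commit", "trunk-wc", "-m", "Merge task-123 back into trunk")

# Eventually a release snapshot is created from Trunk.
svn("copy", f"{REPO}/trunk", f"{REPO}/releases/2.1",
    "-m", "Snapshot of trunk for release 2.1")
```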
Development phase and Production phase may seem confusing, but I'm guessing development means code in the process of being developed, and production means code that has been released into the production environment. You generally don't want to do development in the production environment: start in the development phase and, once the functionality is thoroughly tested, promote it to production.

MsTest noob - how to set up testing infrastructure the right way

We are a MSFT shop with a far-reaching MSDN license.
After many years of doing things wrong, we finally have to start doing automated testing.
My group is the guinea pig for this. We need to create what was not there before. We looked at the multitude of options out there. Some people get by just fine with open-source alternatives such as CC.Net, Bamboo, MbUnit, etc. We want to give MsTest, CodedUI and Team Build a good try ... might as well, because of the MSDN licensing and MSFT focus.
The plus and minus of doing things the MSFT way is that MSFT makes monolithic things. You have to install various tools that play nicely with each other, but not necessarily with outsiders. The plus is that when things are done correctly, it should all function rather smoothly. There is the option of gated check-ins, of using TFS to store the reports, etc.
Frankly, I am confused by all of the options. Our traditional build system was hacked together with a bunch of perl, batch scripts and executables, but now the build team has switched to Team Build, which ought to be cleaner, but for the most part it is just a wrapper around the same old perl crap.
I am inclined to hack things together for testing too, because I can at least see what the pieces are. So, I envision the poor man's version as follows (roughly sketched after the list):
* A dedicated fast computer to run tests
* Some script to copy build files (test code as well as product code) over to that computer.
* A batch/perl script which would run mstest.exe from the command line and execute a few batches of tests, filtered by category, within some test dlls (the product is so huge that we do want to organize tests by various categories).
* Some script which will invoke the latter script remotely from the build server using psexec.exe (http://technet.microsoft.com/en-us/sysinternals/bb897553), grab the xml output from a shared drive, and then send out an email with results to those who are interested.
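A minimal sketch of that plan in Python, assuming hypothetical machine names, paths, and mail addresses (mstest.exe's /testcontainer, /category and /resultsfile switches and psexec's -i switch are the standard ones, but double-check against your versions):

```python
import subprocess, smtplib
from email.message import EmailMessage

# Hypothetical machine and paths -- adjust to your environment.
TEST_PC = r"\\testpc01"
MSTEST = r"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe"
TEST_DLL = r"D:\drop\Product.Tests.dll"
RESULTS = r"\\fileserver\results\nightly.trx"

# Run one category of tests on the remote test machine via psexec.
# -i runs the process in an interactive session, which CodedUI tests generally need.
proc = subprocess.run(
    ["psexec.exe", TEST_PC, "-i", MSTEST,
     f"/testcontainer:{TEST_DLL}", "/category:Smoke",
     f"/resultsfile:{RESULTS}"],
    capture_output=True, text=True)

# Mail a pass/fail summary plus the console output to interested people.
msg = EmailMessage()
msg["Subject"] = f"Nightly tests: {'PASS' if proc.returncode == 0 else 'FAIL'}"
msg["From"] = "build@example.com"
msg["To"] = "team@example.com"
msg.set_content(proc.stdout + proc.stderr)
with smtplib.SMTP("mailhost.example.com") as smtp:
    smtp.send_message(msg)
```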
This can probably work, but then I have to worry about how well error handling can work with so many potential points of failure. It would be nice to configure things the "right way", taking advantage of whatever MSFT has cooked up. I am just not sure where to turn for a good guide. Have you done something like this?
Eventually we will want to have a farm of test computers, in case we run out of the allotted time. Something else of concern: for CodedUI tests to succeed, I think a user has to be logged in, so I am not sure whether psexec will be of much help here.
Can you share your positive/negative experience, point me to a good guide perhaps? Thanks!
Here are some tips off the top of my head if you want to get started with testing using the MS tools:
If you have an MSDN subscription, install a Test Rig by installing the Test Controller on your network and the Test Agent service on each of the machines that will be collecting diagnostic data. See the following link for reference: http://msdn.microsoft.com/en-us/library/dd293551.aspx.
Add a Test Project to your solution. See the first part of the following blog post: http://blogs.microsoft.co.il/blogs/eranruso/archive/2010/03/27/visual-studio-2010-coded-ui-test-user-guide-create-a-simple-coded-ui-test.aspx.
Automated test options can be configured through the .testsettings file(s) that are added automatically when you add a Test project (you can also manually add these files to your solution).
Install Team Foundation Server (2010 recommended) in order to take advantage of automating your tests with a daily build. You will also need TFS 2010 if you want to use the VS2010 Test Manager tool to define test environments and plan manual tests (these can be fully automated with CodedUI). Customize your new automated build to set up / deploy your application after the build, and set the build to run tests. Deployment will likely not be necessary for unit tests, but it will be for Web Performance and CodedUI test types.
If you have VS Ultimate or Test Professional licenses, you can also go further and set up virtual test labs using "Lab Management" features.

Dev and deploy management with SVN of a Web Site

A .NET solution for a website, consisting of 5 projects, with a few (fewer than 10) developers working on the solution. We deploy almost on a daily basis.
The question is how to set up the SVN repo to support this scenario (the daily deploy), bearing in mind that not every committed file should go to production; there is a QA check before deploying.
Try out TeamCity (a CI tool), as it's free for smaller amounts of CI. This may be better for you than CruiseControl.NET, as CCNet is very configuration-heavy since it's all done via XML. TeamCity uses wizards to create the scripts to manage releases.
If you need any other help on CI then let me know, as it's something I am evangelistic about.
What you want to do is commonly referred to as Continuous integration (CI).
While you can do that using Subversion, it is probably not the right tool for the job.
There is special CI software, which will allow you to easily automate the necessary tasks (checkout from version control, compiling / building, running automatic tests, deployment etc). An example would be CruiseControl.NET.
As to "not every commited file should go to production", the common solution is to have a special "release" branch, which gets deployed. Only tested code is merged there (or have the trunk always be stable, otherwise same model). Of course, you can also (better: additionally) have tests before your automatic deployment, and only deploy if all tests pass.
Working with a release branch
In practice, this means that people check in their code as they produce it. Sometimes this code will work, sometimes not. When the release time draws nearer, a "release branch" is created in Subversion. This release branch is then effectively a frozen snapshot of the source as it was at the time of branching. Now this branch can be used to compile & deploy the application, which can then be tested.
No new code is checked into the branch (but checkins can continue elsewhere). Only if a bug is detected in the branch, will there be a checkin into the branch to fix it. This continues until the branch passes all tests. Then the branch can be released as a new version of the software; afterwards the branch will only be used if the released version needs to be patched.
Of course, any bugfixes checked into the branch need to also be put into the trunk (either by merging branch -> trunk, for which Subversion provides special support, or by reimplementing the fix in the trunk, as appropriate).
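As a rough sketch of that release-branch flow (repository URL, branch name and revision number are invented), creating the branch and porting a single branch bugfix back to trunk might look like this:

```python
import subprocess

REPO = "https://svn.example.com/repos/site"  # hypothetical repository URL

def svn(*args):
    subprocess.run(["svn", *args], check=True)

# When release time draws near, freeze a release branch from trunk.
svn("copy", f"{REPO}/trunk", f"{REPO}/branches/release-1.4",
    "-m", "Create release branch 1.4")

# A bug is found and fixed directly on the release branch (say in revision 1234).
# Port that single fix back into a trunk working copy with a cherry-pick style merge.
svn("merge", "-c", "1234", f"{REPO}/branches/release-1.4", "trunk-wc")
svn("commit", "trunk-wc", "-m", "Merge r1234 (release-1.4 bugfix) into trunk")
```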

What alternatives exist for running QTP tests in batch?

We are in the process of implementing automated regression testing for our applications, and are looking for a solid batch-testing utility. We have QuickTest Professional 10.0, and it comes bundled with 'Test Batch Runner' which appears to be deprecated. It appears in previous versions there was 'Multi-Test Manager', which has been discontinued as well.
What alternatives exist, if any?
The canonical way to do this is via Quality Center. If you don't have QC, you can use QTP's Automation Object Model from a .vbs file. The documentation for this is available in Start -> Programs -> QuickTest Professional -> Documentation -> Automation Object Model Reference.
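The reference suggests a .vbs file, but the same Automation Object Model can be driven from any COM-capable language. The sketch below uses Python with pywin32; the object and method names (QuickTest.Application, Launch, Open, Test.Run, LastRunResults) are recalled from the AOM Reference, so verify them against the documentation mentioned above before relying on this:

```python
# Assumes pywin32 and a machine with QuickTest Professional installed.
import win32com.client

tests = [r"C:\QTP\Tests\Login", r"C:\QTP\Tests\Checkout"]    # hypothetical test paths

qtp = win32com.client.Dispatch("QuickTest.Application")
qtp.Launch()
qtp.Visible = True

for path in tests:
    qtp.Open(path, True)                         # open the test in read-only mode
    qtp.Test.Run()                               # runs synchronously by default
    print(path, qtp.Test.LastRunResults.Status)  # "Passed" / "Failed"

qtp.Quit()
```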
QTP 10 works very well with Multi Test Manager V8.2.4.
We use it for our project (we previously used it with QTP 9.2).
Try Google for an installer (if you don't have one); it should be free, just no longer supported by HP.
Since WinRunner times I have used Test Driver scripts very extensively, with great success, due to the following benefits (a rough sketch of such a driver follows the list):
non-programming testers can easily create/maintain batches, as they're stored in XML format
test input files are externally configurable through mapping
a variety of customization parameters is supported, from login credentials to prefixes and switches
test dependencies can be established so that if a critical test case fails, the whole branch of dependent test cases is skipped.
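As a rough sketch of the idea (the XML layout and the runtest.cmd launcher are invented stand-ins, not Albert's actual format; a real driver would launch QTP/WinRunner tests instead):

```python
import subprocess
import xml.etree.ElementTree as ET

# Hypothetical batch format: each <test> names a launch command and, optionally,
# the test it depends on. Tests are listed after the tests they depend on.
BATCH = """
<batch>
  <test name="login"    command="runtest.cmd login"/>
  <test name="search"   command="runtest.cmd search"   dependsOn="login"/>
  <test name="checkout" command="runtest.cmd checkout" dependsOn="search"/>
</batch>
"""

results = {}
for test in ET.fromstring(BATCH).findall("test"):
    name, dep = test.get("name"), test.get("dependsOn")
    # If the test this one depends on did not pass, skip it -- and because a
    # skipped test is not "passed", the whole dependent branch gets skipped too.
    if dep and results.get(dep) != "passed":
        results[name] = "skipped"
        continue
    rc = subprocess.run(test.get("command"), shell=True).returncode
    results[name] = "passed" if rc == 0 else "failed"

print(results)
```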
Now I continue using Test Drivers and introducing them to clients.
The Test Driver approach has been adopted not only by client companies that do not use Quality Center; others followed it because it gives much more flexibility and robustness in automated test plan execution.
Thank you,
Albert Gareev
http://automationbeyond.wordpress.com
I echo Motti... if I get your question right, you can also see the link below:
Work with Test Batch runner

What artifacts to save for a nightly build?

Assume that I set up an automatic nightly build. What artifacts of the build should I save?
For example:
Input source code
Output binaries
Also, how long should I save them, and where?
Do your answers change if I do Continuous Integration?
You shouldn't save anything for the sake of saving it. You should save it because you need it (i.e., QA uses nightly builds to test). At which point, "how long to save it" becomes however long QA wants them.
I wouldn't "save" source code so much as tag/label it. I don't know what source control you're using, but tagging is trivial (in performance and disk space) for any quality source control system. Once your build is tagged, unless you need binaries, there really isn't any benefit to just having them around because you can simply re-compile when necessary from source.
Most CI tools let you tag on each successful build. This can become problematic for some systems as you can easily have 100+ tags a day. For such cases I recommend still running a nightly build and only tagging that.
Here are some artifacts/pieces of information that I'm used to keeping for each build:
The tag name of the snapshot you are building (tag and do a clean checkout before you build)
The build scripts themselves, or their version number (if you treat them as a separate project with its own version control)
The output of the build script: logs and final product
A snapshot of your environment:
compiler version
build tool version
libraries and dll/libs versions
database version (client & server)
ide version
script interpreter version
OS version
source control version (client and server)
versions of other tools used in the process and everything else that might influence the content of your build products. I usually do this with a script that queries all this information and logs it to a text file that should be stored with the other build artifacts.
Ask yourself this question: "if something destroys entirely my build/development environment what information would I need to create a new one so I can redo my build #6547 and end up with the exact same result I got the first time?"
Your answer is what you should keep at each build and it will be a subset or superset of the things I already mentioned.
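As a rough illustration of the "query everything and log it" script mentioned above, here is a minimal sketch; the tools queried are examples, so substitute whatever your build actually depends on:

```python
import datetime, platform, subprocess

def version_of(cmd):
    """Return the first line a tool prints for its version flag, or a note if it's missing."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        return out.splitlines()[0] if out else "(no output)"
    except FileNotFoundError:
        return "(not installed)"

snapshot = {
    "timestamp": datetime.datetime.now().isoformat(),
    "os": platform.platform(),
    # Example toolchain -- replace with your compiler, build tool, DB client, IDE, etc.
    "svn client": version_of(["svn", "--version", "--quiet"]),
    "compiler": version_of(["gcc", "--version"]),
    "build tool": version_of(["make", "--version"]),
    "script interpreter": platform.python_version(),
}

# Store this file alongside the other artifacts of the build.
with open("build_environment.txt", "w") as f:
    for name, value in snapshot.items():
        f.write(f"{name}: {value}\n")
```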
You can store everything in your SCM (I'd recommend a separate repository), but in that case your question of how long to keep the items no longer applies. Or you can store it in zipped folders, or burn a CD/DVD with the build result and artifacts. Whatever you choose, keep a backup copy.
You should store them as long as you might need them. How long that is will depend on your development team's pace and your release cycle.
And no, I don't think it changes if you do continuous integration.
This isn't a direct answer to your question, but don't forget to version control the nightly build setup itself. When the project structure changes, you may have to change the build process, which will break older builds from that point on.
In addition to the binaries, as everyone else has mentioned, I would recommend setting up a symbol server and a source server and making sure you get the correct information out and into those. It will aid debugging tremendously.
We save the binaries, stripped and unstripped (so we have the exact same binary, once with and once without debug symbols). Further, we build everything twice, once with debug output enabled and once without (again, stripped and unstripped, so every build results in 4 binaries). The build is stored in a directory named after the SVN revision number. That way we can always retrieve the source from the SVN repository by simply checking out that very revision (so the source is archived as well).
A surprising one I learned about recently: If you're in an environment that might be audited you'll want to save all the output of your build, the script output, the compiler output, etc.
That's the only way you can verify your compiler settings, build steps, etc.
Also, how long to save them for, and where to save them?
Save them until you know that build won't be going to production, in other words as long as you have the compiled bits around.
One logical place to save them is your SCM system. Another option is to use a tool that will automatically save them for you, like AnthillPro and its ilk.
We're doing something close to "embedded" development here, and I can tell you what we save:
the SVN revision number and timestamp, as well as the machine it was built on and by whom (also burned into the build binaries)
a full build log, showing whether it was a full/incremental build, any interesting (STDERR) output the data baking tools produced, a list of files compiled and any compiler warnings (this compresses very well, being text)
the actual binaries (for anywhere from 1-8 build configurations)
files produced as a side effect of linking: a linker command file, address map, and a sort of "manifest" file indicating what was burned into the final binaries (CRC and size for each), as well as the debugging database (.pdb equivalent)
We also mail out the result of running some tools over the "side-effect" files to interested users. We don't actually archive these since we can reproduce them later, but these reports include:
total and delta of filesystem size, broken down by file type and/or directory
total and delta of code section sizes (.text, .data, .rodata, .bss, .sinit, etc)
When we have unit tests or functional tests (e.g. smoke tests) running, those results show up in the build log.
We've not thrown out anything yet -- granted, our target builds usually end up at ~16 or 32 MiB per configuration, and they're fairly compressible.
We do keep uncompressed copies of the binaries around for 1 week for ease of access; after that we keep only the lightly compressed version. About once a month we have a script that extracts each .zip that the build process produces and 7-zips a whole month of build outputs together (which takes advantage of only having small differences per build).
An average day might have a dozen or two builds per project... The buildserver wakes up about every 5 minutes to check for relevant differences and builds. A full .7z on a large very active project for one month might be 7-10GiB, but it's certainly affordable.
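A rough sketch of that monthly consolidation step (directory layout and naming scheme are made up; assumes the 7z command-line tool is on the PATH):

```python
import glob, os, subprocess, zipfile

MONTH = "2011-06"                         # hypothetical naming scheme
WORK = os.path.join("unpacked", MONTH)
os.makedirs(WORK, exist_ok=True)
os.makedirs("archive", exist_ok=True)

# Extract every per-build .zip for the month, so that 7-Zip can exploit the
# small differences between consecutive builds when compressing them together.
for path in glob.glob(os.path.join("builds", f"{MONTH}-*.zip")):
    name = os.path.splitext(os.path.basename(path))[0]
    with zipfile.ZipFile(path) as z:
        z.extractall(os.path.join(WORK, name))

# One .7z for the whole month of build outputs.
subprocess.run(["7z", "a", os.path.join("archive", f"{MONTH}.7z"), WORK], check=True)
```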
For the most part, we've been able to diagnose everything this way. Occasionally there's a hiccup in the buildsystem and a file isn't actually at the revision it's supposed to be when a build happens, but there's usually enough evidence of this in the logs. Sometimes we have to dig out a tool that understands the debugging database format and feed it a few addresses to diagnose a crash (we have automatic stackdumps built into the product). But usually all the information needed is there.
We haven't had to crack open the .7z archives yet, I should mention. But we have the info there, and I have some interesting ideas on how to mine useful bits of data from it.
Save what can't be reproduced easily. I work on FPGAs, where only the FPGA team has the tools and some cores (libraries) of the design are licensed to compile on only one machine. So we save the output bitstreams. But try to check them against one another rather than relying on a date/time/version stamp.
Save as in check in to source code control or just on disk? Save nothing to source code control. All derived files should be visible in the file system and available to developers. Don't checkin binaries, code generated from XML files, message digests etc. A separate packaging step will make these end products available. As you have the change number you can always reproduce the build if necessary assuming of course everything you need to do a build is completely in the tree and is available to all builds by syncing.
I would save your built binaries for exactly as long as they have a chance to go into production or be used by some other team (like a QA group). Once something has left production, what you do with it can vary a lot. For a lot of teams, they'll keep just their most recent prior build around (for rollback) and otherwise discard their builds.
Others have regulatory requirements to keep anything that went into production around for as long as seven years (banks). If you are a product company, I'd keep around any binary a customer might have installed in case a tech support guy wants to install the same version.