Sync defects from QC (HP Quality Center)

Our QA team uses QC to manage defects.
Our dev team uses VS 2010, TFS 2010 (for source control only), and SharePoint.
The QA team is behind a private network with no connection to the dev network.
What is the best (simple and cheap) way to sync just the defects between the two teams?

HP ALM Synchronizer is a tool provided by HP for defect and requirement synchronization between TFS and QC 10 / ALM 11.
It's free and relatively simple once you read the manual. You can try to skip reading the manual, but I don't recommend it. Both QC and TFS are complicated products, and as such, synchronization between the two is somewhat complicated as well.

Proper DTAP setup for Content Delivery

I've had this setup, but it didn't seem quite right.
How would you improve Content Delivery (CD) development across multiple .NET (customer) development teams?
CMS Server -> Presentation Server Environments
CMS Production -> Live and Preview websites
CMS Combined Test + Acceptance (internally called "Staging") -> Live ("Staging")
CMS Development (DEV) -> Live (Dev website) and sometimes Developer local machines (laptops)
Expectations and restrictions:
Multiple teams and multiple websites
Single DEV CMS license (typical for customers, I believe?)
Enough CD licenses for each developer
Preferably, developers could program and run changes locally--was this a reasonable expectation?
Worked
We developed ASP.NET pages using the Content Delivery API against the same broker database for local machines and CD DEV. Local machines had the CD DLLs and their own license files, and ran/debugged fine with queries and component presentation calls.
Bad
We occasionally published to both the Dev presentation server and developer machines, which doesn't seem right in hindsight, but I think it was to get schema files onto our local machines. And yes, we didn't trust the Dev broker database.
Problematic:
Local machines sometimes needed Tridion-published pages but we couldn't reliably publish to local machines:
Setting multiple publication destinations for a single "Local Machine" publication target wouldn't work--we'd often take these "servers" home.
VPN blocked access to laptops offsite (used "incoming" folder at the time).
Managing publication targets for each developer and setting up CD for each new laptop was good practice (as in exercise, not necessarily a good idea), but just a little tedious.
Would these hindsight approaches apply?
Synchronize physical files from Dev to local machines on our own?
Don't run presentation sites locally (localhost) but rather build, upload dll, and test from Dev?
We were simply missing a fourth CMS environment? As much as we liked our Sales Guy, we weren't interested in purchasing another CM license.
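The first hindsight approach above (synchronizing published files from Dev to local machines ourselves) could be as simple as a one-way mirror. Here's a minimal Python sketch of the idea; robocopy would be the Windows-native equivalent, and all the names here are illustrative, not part of any Tridion API:

```python
# One-way mirror of published files from a Dev share to a local machine.
# Hypothetical sketch: paths and function names are made up for illustration.
import filecmp
import shutil
from pathlib import Path

def mirror(src, dst):
    """Copy every file under src into dst, overwriting files that differ.

    Does not delete extra files already present in dst. Returns the list
    of destination paths that were actually (re)copied.
    """
    src, dst = Path(src), Path(dst)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            # Only copy when the target is missing or its content differs.
            if not target.exists() or not filecmp.cmp(f, target, shallow=False):
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
                copied.append(target)
    return copied
```

Run on a schedule (or on demand), this sidesteps publication targets per laptop entirely, at the cost of local pages lagging behind the Dev presentation server.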
How could you better set up .NET CD for several developers in an organization?
Edit: @DominicCronin pointed out this is only a subset of a proper DTAP setup. I've updated my terms and created a separate question to clarify DTAP with Tridion.
The answer to this one depends heavily on the publishing model you choose.
When using a dynamic model with a framework like DD4T, a single dev environment will suffice. There is one CMS and one CD server in that environment, and everything is published to a broker database. The CD environment can double as an auto-build system; the developers work purely locally on a localhost website (which gets its data from the dev broker database), and their changes are checked in to a VCS (from which the auto build can be done).
This solution can make do with only a single CMS because hardly any code is developed on the CMS side (templates are standardized and all the work is done on the CD side).
It gets more complex if you are using a static or broker publishing model. In that case, I think the solution is indeed to split Dev up into Unit-Dev and Dev, as indicated by Nuno and Chris.
This solution requires coding on both the CMS and the CD side, so every developer benefits hugely from having their own local CMS and CD environment.
Talk to your Tridion account manager and agree a license package that suits the development model you want to have. Of course, they want to maximise their income, but the various things that get counted are all really meant to ensure that big customers pay accordingly, and smaller customers get something they can afford at a price that reflects the benefits they get. In fact, setting up a well-thought-out development street with a focus on quality is the very thing that will ensure good customer satisfaction and a long-running engagement.
OK - so the account managers still have internal rules to follow, but they also have a fair amount of autonomy in coming to a sensible deal with a customer. I'm not saying this will always work, but it's way better than blindly assuming that they are going to insist on counting every server the same way.
On the technical side - sure, try to have local developer setups and a common master dev server, a la Chris's fifth point. These days, your common dev environment should probably be seen as a build/integration server: the first place where the team guarantees all the tests will run.
Requirements for CM and CD development aren't very different, although you may be able to publish to multiple developer targets from one CM if there's not much CM development going on. (This is somewhat true of MVC-ish approaches, but it's no silver bullet.)

Synchronize defects and requirements between TFS and HP QC

We use TFS 2010 (for development and requirements) and HP Quality Center for testing and defects. We currently use Juvander TFS Bug Synchronizer for synchronizing defects and requirements between TFS 2010 and HP Quality Center 10.00.
The problem with Juvander is that it gets slow as the number of projects increases.
I have been asked to investigate alternative tools to sync between TFS and HP QC.
I have looked into the HP QC Synchronizer, but it cannot sync requirements between TFS and HP Quality Center.
I want to know if anyone uses any such synchronizers. Any help is appreciated.
Please look at the use case given below. As far as I understand, this is what you are trying to achieve:
The Product Manager creates a ‘requirement’ in TFS and attaches a screenshot that includes communication details from the customer.
The development team receives the ‘requirements’ and starts work on it.
The ‘requirement’ also synchronizes to HPQC.
The QA team creates ‘test cases’ against the ‘requirement’ and links the ‘test cases’ to the ‘requirement’.
Once the development team completes work on the ‘requirement’, it changes the status of the ‘requirement’ in TFS to ‘closed’.
The QA team runs the ‘test cases’ against the closed ‘requirement’. If the ‘test cases’ pass, the QA team changes the status of the ‘requirement’ in HPQC to ‘complete’, which automatically updates the status of the ‘requirement’ in TFS. If the ‘test cases’ fail, the QA team analyzes the issue and logs a ‘defect’ in HPQC, which reopens the ‘requirement’ in TFS.
If my assumption is right, please check out this datasheet that describes TFS-HPQC integration using an integration solution, OIM, in detail.
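If it helps, the pass/fail rules in that use case boil down to a tiny state function. The sketch below is purely illustrative (the status names and the function are made up, not a real TFS or HPQC API):

```python
# Illustrative model of the requirement status rules described above.
# Status strings ("Closed", "Complete", etc.) are hypothetical labels.

def qa_run(tfs_status, hpqc_status, test_passed):
    """Apply a QA test run to a linked requirement's statuses.

    Returns (new_tfs_status, new_hpqc_status, defect_logged).
    """
    if tfs_status != "Closed":
        # Development isn't finished yet, so QA has nothing to verify.
        return tfs_status, hpqc_status, False
    if test_passed:
        # Completing in HPQC propagates the "done" state back to TFS.
        return "Closed", "Complete", False
    # A failing test logs a defect in HPQC and reopens the requirement in TFS.
    return "Active", "Not Complete", True
```

Whatever synchronizer you pick ultimately has to implement exactly this kind of bidirectional status mapping, which is where the field-mapping configuration effort goes.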

QC OTAClient dll

We have developed a QC adapter for one of our clients using the OTA API by referencing OTAClient.dll.
Is this DLL redistributable?
TDAPIOLELib.dll can be generated using TlbImp.exe OTAClient.dll (with the correct paths), so IMHO any redistribution concerns should be about OTAClient.dll.
OTOH, OTAClient.dll is part of the HP ALM Client Registration (from the add-in page), so that might ease the deployment:
Deploys and registers ALM components on a client machine.
Of course, I expect that OTAClient.dll and any other client components are only legal to use for licensed clients of HP ALM - I'm still researching this topic, and how custom apps using this DLL affect the license count.
I have never read the EULA provided with the OTA API, and this is my own personal opinion, but as a hunch I would say it is not supposed to be a problem, as long as you distribute only your own binaries and none of those owned by HP.

Development and Test Environment Best Practices?

This question is for ASP.NET and SQL Server developers. What are your best practices for setting up your development and test environments? I'm interested in the following issues:
How many tiers do you recommend, and what runs on each tier? Just dev, test, and production, or perhaps dev, test, staging, and production?
Which types of applications and/or servers should run on actual physical hardware and which can get away with a VM?
What are your strategies for loosely coupling users from web sites, web developers from their web/app/DB servers, and DB developers from their DB servers?
How do developers stay "DRY?" (no deodorant jokes, please ;)
What are the pros and cons to putting web, app, and DB servers on their own machines? Does putting servers on separate machines in order to minimize contention for a machine's resources trump any NIC and network latencies that might be introduced by putting them on different machines?
How do you configure your web apps to minimize contention for resources (e.g. virtual directories, separate application pools, etc.)?
How and how often do you refresh your databases on each tier? Do you just refresh the data or both the data and objects?
I can't comment on all of these, but here's what I've found to work best in my experience.
1) Depends on your resources, but ideally I like to have four:
Dev is hyper flexible and owned by your dev team. It can get updated whenever they feel is best or as features are completed.
QA is updated on a scheduled or per-delivery basis, depending on your process. If you do waterfall, it's updated when you're in the testing phase; if you do iterative agile, it's updated each iteration. It should mimic prod as closely as possible, but you may be able to get away with some compromises (see #2).
Staging should be identical in every way to prod. It should even use real production data if possible (potentially restored from a recent backup of the true production environment). It should be used for acceptance testing prior to any release.
And finally, Prod.
2) Dev can usually be on a VM. QA can too, most of the time. Staging and prod should match. I've seen folks run prod on VMs before; it depends on your resources and the demand for your app.
3) Our devs use a backup of prod on local SQL Servers for development. This keeps everyone off of a central dev SQL Server. Dev web and dev SQL are separate boxes (just out of necessity; they manage a bunch of projects). Same with QA, Staging, and Prod.
4) A lot of testing and communication. If you have one small/medium team, this isn't that hard. If you have lots of teams, look at something like Scrum, formal code reviews, anything to keep communication going between teams. Don't treat DRY issues like suggested fixes; treat them like bugs that need to be fixed. You'll spend way more time maintaining the code than writing it up front, so treat maintenance as a first-class citizen and make sure management is on board with that.
5 & 6) Not really qualified to comment
7) Dev is refreshed whenever the teams need it; QA and up on a schedule depending on deployments. QA is every iteration/sprint; Staging and Prod are every release.

Stress Testing Managed Host with VS 2008

Is it possible to stress test a managed host (not my own machine) using VS 2008?
Although it is not free, Visual Studio 2008 Team System Test Edition is a good stress-test tool.
VS 2008 doesn't really have built-in tools to test at any volume, but there are free ones out there:
Apache Bench - made by Apache, but usable against any web server
Web Capacity Analysis Tool - Microsoft
As a stress-test tool, I liked The Grinder a lot and found it easy to use.
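If you just want a rough smoke test before reaching for the tools above, a do-it-yourself probe is easy to sketch. This is a hypothetical example using only the Python standard library (the URL, counts, and summary fields are all illustrative); ab or WCAT remain the better choice for real load runs:

```python
# Minimal concurrent GET probe with latency summary.
# Hypothetical sketch: function names and defaults are made up.
import concurrent.futures
import math
import statistics
import time
import urllib.request

def timed_get(url):
    """Fetch url once and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def summarize(latencies):
    """Aggregate raw latencies into the numbers a load test reports."""
    ordered = sorted(latencies)
    # Index of the 95th-percentile sample (nearest-rank method).
    p95_index = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "requests": len(ordered),
        "mean": statistics.mean(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

def load_test(url, requests=100, concurrency=10):
    """Fire `requests` GETs against url using `concurrency` worker threads."""
    with concurrent.futures.ThreadPoolExecutor(concurrency) as pool:
        return summarize(list(pool.map(timed_get, [url] * requests)))
```

Something like `load_test("http://yourhost/", requests=200, concurrency=20)` would give you mean/p95/max latencies. Bear in mind the point made below: run externally, the numbers include network effects and will vary with time of day.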
You will be able to stress test against the external machine, however you will not be able to see any perfmon stats from the target machines.
You will also probably cause serious consternation with your hosting provider as a good load test stresses the network pretty hard. You may want to talk to them before going ahead.
From an external source you will also likely include network effects in your test, so different times of day will result in different network loads and different results.
Ideally you would want to install a load test controller and agent on a server on the same switch as the target to get high loads on your application. This does require a hefty licence however.
Visual Studio 2010 has better licensing for the Load Test Agent, but may be more expensive than you want.