Synchronize defects and requirements between TFS and HP QC

We use TFS 2010 for development and requirements, and HP Quality Center for testing and defects. We currently use the Juvander TFS Bug Synchronizer to synchronize defects and requirements between TFS 2010 and HP Quality Center 10.00.
The problem with Juvander is that it gets slow as the number of projects increases.
I have been asked to investigate alternative tools to sync between TFS and HP QC.
I have looked into the HP QC Synchronizer, but it cannot sync requirements between TFS and HP Quality Center.
I want to know if anyone uses any such synchronizers. Any help is appreciated.
Thanks in advance.

Please look at the use case given below. As far as I understand, this is what you are trying to achieve:
The Product Manager creates a ‘requirement’ in TFS and attaches a screenshot that includes communication details from the customer.
The development team receives the ‘requirement’ and starts work on it.
The ‘requirement’ also synchronizes to HPQC.
The QA team creates ‘test cases’ against the ‘requirement’ and links them to it.
Once the development team completes work on the ‘requirement’, it changes the status of the ‘requirement’ in TFS to ‘closed’.
The QA team runs the ‘test cases’ against the closed ‘requirement’. If the ‘test cases’ pass, the QA team changes the status of the ‘requirement’ in HPQC to ‘complete’, which automatically updates the status of the ‘requirement’ in TFS. If a ‘test case’ fails, the QA team analyzes the issue and logs a ‘defect’ in HPQC, which reopens the ‘requirement’ in TFS.
If my assumption is right, please check out this datasheet, which describes TFS-HPQC integration using an integration solution, OIM, in detail.
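If it helps to make that flow concrete, here is a minimal sketch of the status round trip described above, written as plain JavaScript. The state names, the event shape, and the `mirrorStatus` helper are all hypothetical illustrations, not taken from OIM or any particular synchronizer:

```javascript
// Hypothetical transition rules for the requirement-status round trip above.
// None of these names come from OIM; they only illustrate the idea.
const rules = [
  // Development completes: closing the requirement in TFS marks it testable in HPQC.
  { system: "TFS",  from: "Active",         to: "Closed",
    mirror: { system: "HPQC", set: "Ready for Test" } },
  // All test cases pass: completing in HPQC keeps the TFS requirement closed.
  { system: "HPQC", from: "Ready for Test", to: "Complete",
    mirror: { system: "TFS", set: "Closed" } },
  // A test case fails and a defect is logged: the TFS requirement is reopened.
  { system: "HPQC", from: "Ready for Test", to: "Failed",
    mirror: { system: "TFS", set: "Active" } },
];

// Given a status-change event from either system, find what to propagate.
function mirrorStatus(event) {
  const rule = rules.find(r =>
    r.system === event.system && r.from === event.oldStatus && r.to === event.newStatus);
  return rule ? rule.mirror : null; // null means nothing to sync
}

// Example: a failing test run logged in HPQC reopens the requirement in TFS.
console.log(mirrorStatus({ system: "HPQC", oldStatus: "Ready for Test", newStatus: "Failed" }));
// -> { system: "TFS", set: "Active" }
```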

Related

HP QC ALM: How can I list the recently checked-in scripts and tests?

We are working with HP QC ALM v11.00 and have enabled version control in it.
I need to keep an eye on all scripts (Business Components and Test Resources) that have recently been modified, so that I have an overview of the changes happening in the code and can make sure it conforms to our coding standards, has been reviewed, and so on.
I cannot seem to find an obvious way to do this. Can someone help?
Thanks in advance.

Is TideSDK defunct?

I am interested in creating a desktop application using HTML5 + WebKit, and I'd like to be able to build stand-alone executables for various target platforms, like a .exe file for Windows and a .dmg image for Mac OS. I have played around with node-webkit, which seems nice except for the packaging/distribution portion. I also stumbled on TideSDK, but that project seems to be inactive; for example, the latest release I saw was a beta from November 2012. Meanwhile, the core developers seem to have switched to developing TideKit instead.
Does anyone here know if TideKit is intended as a replacement for TideSDK? Is TideSDK going away? etc.
Well, TIDE is now officially a dead project. I just got this email about 15 minutes ago.
TideKit.com and TideKit have been discontinued.

TideKit was software for developing apps for all platforms simultaneously with a single code base written in JavaScript. The scope and complexity of the product made it difficult to assemble the platform all at once. This stemmed from a holistic approach to app development for all platforms. While creating a platform for JavaScript developers, much of the core engineering is in a variety of lower-level languages that affect the speed of development. We considered delivering parts of our platform as we reached milestones, but this was not suitable for the start of trials.

We were widely criticized for not revealing our technical innovation in advance of our release. In a competitive environment, revealing advantages as you go can also mean assimilation as you go. We had already witnessed how quickly our technical advantages could be assimilated by competitors to our open source TideSDK product. Therefore, we held back with a view to delaying the duplication of features by competitors, increasing our technical barriers, and working to protect our IP and business case until we felt we were ready.

In a startup, we talk about a Minimum Viable Product (MVP). In our case, our minimum viable product was much larger and more difficult to achieve. In total, approximately three years of research and development were committed, with multiple developers working more than full-time hours. A factor that extended the development was an expansion of scope that aimed to lower friction in the app development process.

In February 2014, we created a reservation system to queue developers for the earliest possible access to TideKit. Our goal was to provide an early trial when it became available. Since the development itself was complex, we could not provide a date when ticket holders could start the trial process, but it would be following our betas, then moving forward as we scaled the platform.

We were clear with our language on the site concerning reservations. As a result, we expected little confusion about what was being purchased, our expectations of timing to market, or the terms of purchase for a reservation ticket. Purchasers were not paying for our product at this point, but for their position in a queue for a trial of our new technology. We also included a refund policy to ensure the terms of purchase for your ticket were available. The wait has been long, but not nearly as long as for other difficult engineering challenges, including Myo, which pre-sold their product and was also delayed before successfully rolling out.

Throughout the development cycle we provided updates on our status via posts to our roadmap page, email to our ticket holders, and communications on our social channels. We did our best as a team to open ourselves to questions and maintain a social presence.

At the end of May 2015, we communicated our strategy to execute a series of focused betas that would have seen the platform revealed publicly and incrementally. We were at a stage where parts of the platform needed developer feedback as we rolled these out consecutively.

In the days preparing for our first public beta, we recognized the extent to which our brand had been poisoned by our timing to market. A campaign of negativity that had begun several months earlier among followers and ticket holders had taken its toll on our team, brand, and business.

We believed the beta releases would soon bring an end to the negative talk. On July 8 and 9 we faced further eruptions on social media that reached the tipping point. With the discussion no longer about the product or its future, this was far more serious.

We failed to bring you the product quickly enough. As a result, we came to the serious decision to discontinue TideKit and dissolve our company.

We wish to thank everyone who worked on the product and with our team. This includes the businesses, entrepreneurs, and supporters of our vision for app development.

Your TideKit Team
You are right: TideSDK is aging and pretty inactive today. And you're also right that we as a core team are completely focused on TideKit now. TideKit is the future!
If you want to know the full story about why we stopped working on TideSDK and started TideKit, I recommend reading our first Q&A. There you'll also find an answer about how we compete with node-webkit:
https://blog.tidekit.com/post/your-questions-our-answers-01
We've just reached the highest HTML5 score any app development platform has ever achieved. If you want to know more about builds, like the ones you mentioned for Windows and OS X, you should read this:
Desktop Builds
https://blog.tidekit.com/post/from-a-desktop-perspective-tidekit-for-tidesdk-developers
There is a new kid on the block for this sort of project: atom-shell. It is based on Node.js and was used to create the great Atom editor.
Technical differences with node-webkit: https://github.com/atom/atom-shell/blob/master/docs/development/atom-shell-vs-node-webkit.md
Presentation at JSLA about "Native NodeJS Apps": http://vimeo.com/97881078
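To give a feel for what an atom-shell app involves, here is a minimal sketch of a main process script. It assumes an `index.html` sitting next to it, and uses the `electron` module name the project adopted after its rename from atom-shell:

```javascript
// Minimal main process for an atom-shell/Electron app: open one window
// and load a local HTML5 page. Assumes index.html sits next to this file.
const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadURL(`file://${__dirname}/index.html`);
}

// Create the window once the runtime is initialized.
app.on('ready', createWindow);

// Quit when all windows are closed, except on OS X where apps
// conventionally stay active until the user quits explicitly.
app.on('window-all-closed', () => {
  if (process.platform !== 'darwin') app.quit();
});
```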
If you look at this blog post, they talk about how unsustainable the economic situation was:
http://www.tidesdk.org/blog/2013/04/11/tidesdk-in-numbers/
I can't find the tweet that stated the reasons behind the transition from one project to the other, but I guess the blog post speaks for itself.
Anyway, I'm delivering a project written in node-webkit (because I started on Tide but for the obvious reasons had to switch), and I'm using Grunt for packaging; in the end it's not that bad.
Electron (http://electron.atom.io/) is the new way to go.
I also had an app running on TideSDK (https://github.com/vinyll/worktimer.titanium) and I'll have to migrate it to Electron.
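For anyone doing a similar migration, the scaffolding around that main script is small. A minimal `package.json` along these lines (the name and version are placeholders) lets `npm start` launch the app once the `electron` package is installed as a dev dependency (older setups used `electron-prebuilt` instead):

```json
{
  "name": "worktimer-electron",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "devDependencies": {
    "electron": "^1.0.0"
  }
}
```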

Proper DTAP setup for Content Delivery

I've had this setup, but it didn't seem quite right.
How would you improve Content Delivery (CD) development across multiple .NET (customer) development teams?
CMS Server -> Presentation Server Environments
CMS Production -> Live and Preview websites
CMS Combined Test + Acceptance (internally called "Staging") -> Live ("Staging")
CMS Development (DEV) -> Live (Dev website) and sometimes Developer local machines (laptops)
Expectations and restrictions:
Multiple teams and multiple websites
Single DEV CMS license (typical for customers, I believe?)
Enough CD licenses for each developer
Preferably, developers could program and run changes locally. Was this a reasonable expectation?
Worked:
We developed ASP.NET pages using the Content Delivery API against the same broker database for local machines and CD Dev. Local machines had the CD DLLs and their own license files, and ran/debugged fine with queries and component presentation calls.
Bad:
We occasionally published to both the Dev presentation server and developer machines, which doesn't seem right in hindsight, but I think it was to get schema files onto our local machines. And yes, we didn't trust the Dev broker database.
Problematic:
Local machines sometimes needed Tridion-published pages but we couldn't reliably publish to local machines:
Setting multiple publication destinations for a single "Local Machine" publication target wouldn't work; we'd often take these "servers" home.
The VPN blocked access to laptops offsite (we used the "incoming" folder at the time).
Managing publication targets for each developer and setting up CD for each new laptop was good practice (as in exercise, not necessarily as a good idea) but just a little tedious.
Would these hindsight approaches apply?
Synchronize physical files from Dev to local machines on our own?
Don't run presentation sites locally (localhost), but rather build, upload the DLL, and test from Dev?
We were simply missing a fourth CMS environment? As much as we liked our Sales Guy, we weren't interested in purchasing another CM license.
How could you better set up .NET CD for several developers in an organization?
Edit: @DominicCronin pointed out this is only a subset of a proper DTAP setup. I've updated my terms and created a separate question to clarify DTAP with Tridion.
The answer to this one depends heavily on the publishing model you choose.
When using a dynamic model with a framework like DD4T, a single dev environment will suffice. There is one CMS and one CD server in that environment, and everything is published to a broker database. The CD environment could be used as an auto-build system; the developers work purely locally on a localhost website (which gets its data from the dev broker database), and their changes are checked into a VCS (from which the auto build could be done).
This solution can do with only a single CMS because there is hardly any code developed on the CMS side (templates are standardized and all work is done on the CD side).
It gets more complex if you are using a static or broker publishing model. Then I think the solution is indeed to split Dev up into Unit-Dev and Dev, as indicated by Nuno and Chris.
That setup requires coding on both the CMS and CD sides, so every developer benefits hugely from having their own local CMS and CD environment.
Talk to your Tridion account manager and agree on a license package that suits the development model you want to have. Of course, they want to maximise their income, but the various things that get counted are all really meant to ensure that big customers pay accordingly, and smaller customers get something they can afford at a price that reflects the benefits they get. In fact, setting up a well-thought-out development street with a focus on quality is the very thing that will ensure good customer satisfaction and a long-running engagement.
OK, so the account managers still have internal rules to follow, but they also have a fair amount of autonomy in coming to a sensible deal with a customer. I'm not saying this will always work, but it's way better than blindly assuming that they are going to insist on counting every server the same way.
On the technical side: sure, try to have local developer setups and a common master dev server, à la Chris's 5th. These days, your common dev environment should probably be seen as a build/integration server: the first place where the team guarantees all the tests will run.
Requirements for CM and CD development aren't very different, although you may be able to publish to multiple developer targets from one CM if there's not much CM development going on. (This is somewhat true of MVC-ish approaches, but it's no silver bullet.)

Sync defects from QC

Our QA team uses QC to manage defects.
Our dev team uses VS2010, TFS2010 (for source control only), and SharePoint.
The QA team is behind a private network with no connection to the dev network.
What is the best (simple and cheap) way to sync just the defects between the two teams?
HP ALM Synchronizer is a tool provided by HP for defect and requirement synchronization between TFS and QC 10 / ALM 11.
It's free and relatively simple once you read the manual. You can try to skip reading the manual, but I don't recommend it. Both QC and TFS are complicated products, and as such, synchronization between the two is somewhat complicated as well.

Best Source Control Solution for Oracle/ASP.NET Environment? [closed]

I am trying to plan a way for five developers to use Visual Studio 2005/2008 to collaboratively develop an ASP.NET web app on a development web server against an Oracle 8i (soon to be 10g) database.
The developers are either on the local network or coming in over a VPN (not a very fast connection).
I evaluated the latest Visual SourceSafe, but ran into the following gotchas:
1) We can't use decentralized development because we can't replicate a development Oracle database to all developers' computers. Also, the VPN is too slow to let their local app instances connect to the database server.
2) Since the VSS source code is not on the file system, the only way to debug it is to build the app and run the debugger, which only one developer can do at a time on a centralized development server. This is unacceptable. We tried using shadow folders so that every time a file is checked in it gets published to the app instance on the development server, but this failed for remote developers on the VPN.
3) Since the developers do a lot of web code, it is important for productivity reasons that when they save a file, they can immediately see the change working on the development server.
4) There is no easy way to implement a controlled process for pushing files to the production server.
Any suggestions on a source control solution that would work under these constraints?
Update: I guess since development is forced to be on the server, we need to go with a "lock and check in" model. So which source control solution would work best for "lock and check in" scenarios?
Update: Does VisualSVN support developing centrally against a development server? As in, can the dev immediately see his update on the development server after saving in VS?
I have used Subversion and TortoiseSVN and was very pleased.
Is point 1 due to an issue with your database schema (or data)?
"We can't use decentralized development because we can't replicate a development Oracle database to all developers' computers."
If not, I strongly suggest that every developer have their own environment (Visual Studio, Oracle, etc.) and that you use your development server for integration purposes. Maybe you could just give them a subset of the data, or maybe just the schema scripts.
Oracle Express Edition is a perfect fit for this scenario. Besides, sharing the same database violates rule #1 for database work, which in my experience should be enforced wherever possible.
As Guy suggested, have an automated build allowing any developer to recreate their database schema at any time.
More very useful guidelines can be found here (including rule #1 above).
Define your development process so that parallel development is possible, and only use locks as a last resort.
I'm sorry if you already envisioned these solutions and found them unfit for your situation, but I really felt the urge to express them, just in case...
Visual Source Safe is the spawn of Satan.
Look at Subversion and VisualSVN (with TortoiseSVN). Sure, VisualSVN costs a bit ($49 per seat), but it is a great tool. We have a development team of six programmers, and it has been a great boon to us.
If you can spend the money, then Team Foundation Server is the one that works best in a Visual Studio dev environment.
And based on personal experience, it works beautifully over VPN connections. And you can of course have automated builds going on it.
I would say SVN on price (free), Perforce on ease of integration.
You will undoubtedly hear about Git and CVS as well, and there are good reasons to look at them.
Interesting: it sounds like you are working on a web site project on the server, and everyone is working on the same physical files. I agree that SVN is far superior to VSS and really good to work with, but in my experience it's really geared toward developers working on a copy of the code locally.
VSS is a "lock and check in" type of source control, while SVN, TFS, and most others are "edit and merge": devs all get copies of the source, edit the files as needed, and later merge their changes into source control; if someone else has edited a file in the meantime, they merge the changes together.
From a database standpoint, I assume you are checking in your database scripts and then having some automated build package and run them (or maybe just a dev or DBA running them manually every so often). In this case, having the developers keep a local copy of the scripts that they can edit and merge using SVN or TFS makes sense.
For a team working on a shared copy of the source code on a development server, though, you may run into problems with edit and merge; a "lock and check in" model of source control may work better for you. Just not VSS, from a corruption and stability standpoint.
