Association of any kind between QC projects? - hp-quality-center

I want to associate one QC project with another (e.g., manual testing and automation testing). I use QC 11.00
I would like to know what kind of association there can be between two QC projects (on the same domain), so that I do not have to maintain two projects and copy and paste what I need (e.g., common repositories).

I'm not sure that you can do this. A project in QC is supposed to be a self-contained entity, that is, there is no way (that I know of) that you can automatically move data between projects.
Sure, you can copy and paste data, as well as create a project with another one as base, but that is probably not what you want.
I would rather have manual testing and automation in the same project, which makes more sense I think. The point is that the project is supposed to identify the test object, rather than the test methodology - the latter can be done better in Test Plan where you specify a Test Type when you create your test.
This way, you will have all defects and test reports for your test object in the same project which will make it all the easier to track what is going on.

As a general rule, you want to keep all project data for one project in that project, and you want that data to remain unique and separate from all other projects.
That being said, if you really wanted to do this (and were able to convince a QC subject matter expert that it was a good idea), it should be a relatively simple matter to amend the workflow with additional code that interfaces with another project.

Related

What is the best practice for transferring objects across R projects?

I would like to use R objects (e.g., cleaned data) generated in one git-versioned R project in another git-versioned R project.
Specifically, I have multiple git-versioned R projects (that hold drake plans) that do various things for my thesis experiments (e.g., generate materials, import and clean data, generate reports/articles).
The experiment-specific projects should ideally be:
Connectable - so that I can get objects (mainly data and materials) that I generated in these projects into another git-versioned R project that generates my thesis report.
Self-contained - so that I can use them in other non-thesis projects (such as presentations, reports, and journal manuscripts). When sharing such projects, I'd ideally like not to need to share a monolithic thesis project.
Versioned - so that their use in different projects can be independent (e.g., if I make changes to the data cleaning for a manuscript after submitting the thesis, I still want the thesis to be reproducible as it was originally compiled).
At the moment I can see three ways of doing this:
Re-create the data cleaning process
But: this involves copy/paste, which I'd like to avoid, especially if things change upstream.
Access the relevant scripts/functions by changing the working directory
But: even if I used the here package, it seems that this would hurt reproducibility.
Make the source projects into packages and make the objects I want to "export" into exported data (as per the data section of Hadley's R packages guide)
But: I'd like to avoid the unnecessary metadata, artefacts, and noise (e.g., see Miles McBain's "Project as an R package: An okay idea") if I can.
Is there any other way of doing this?
Edit: I tried @landau's suggestion of using a single drake plan, which worked well for a while, until (similar to @vrognas' case) I ended up with too many sub-projects (e.g., conference presentations and manuscripts) that relied on the same objects. Therefore, I added some clarifications above to my intentions with the question.
My first recommendation is to use a single drake plan to unite the stages of the overall project that need to share data. drake is designed to handle a lot of moving parts this way, and it will be more seamless when it comes to drake's decisions about what to rerun downstream. But if you really do need different plans in different sub-projects that share data, you can track each shared dataset as a file_out() file in one plan and track it with file_in() in another plan.
library(drake)
library(readr)

# Upstream project: write the shared dataset to a tracked output file.
upstream_plan <- drake_plan(
  export_file = write_csv(dataset, file_out("exported_data/dataset.csv"))
)

# Downstream project: track the same file as an input so `dataset` is rebuilt when it changes.
downstream_plan <- drake_plan(
  dataset = read_csv(file_in("../upstream_project/exported_data/dataset.csv"))
)
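If it helps, here is a rough sketch of how the two plans might be run (assuming the relative path above resolves from the downstream project's working directory): build the upstream plan first so the exported CSV exists, then build the downstream plan, which re-reads the file only when it changes.

# In the upstream project:
make(upstream_plan)      # runs write_csv() and records the file_out() fingerprint

# In the downstream project:
make(downstream_plan)    # file_in() detects changes to the shared CSV and rebuilds `dataset`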
You fundamentally misunderstood Miles McBain’s critique. He isn’t saying that you shouldn’t write reusable code, or that you shouldn’t use packages. He’s saying that you shouldn’t use packages for everything. But reusable code (i.e. code that you want to reuse) absolutely belongs in packages (or, better, modules), which can then be used in multiple projects.
That being said, first off, pay attention to Will Landau’s advice.
Secondly, you can make your RStudio projects configurable so that they load data based on paths given in a configuration file. Once that’s in place, there is nothing wrong with hard-coding the paths to data from different projects inside that config file.
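For example, here is a minimal sketch of that configuration approach, assuming the config and readr packages and a made-up config.yml with a cleaned_data entry (the file names and keys are illustrative, not from the original question):

library(config)   # reads config.yml and honours the R_CONFIG_ACTIVE environment variable
library(readr)

# config.yml at the project root might look like:
# default:
#   cleaned_data: "../upstream_project/exported_data/dataset.csv"
# thesis:
#   cleaned_data: "path/to/frozen/thesis/dataset.csv"

cfg <- config::get()                    # returns the active configuration as a list
dataset <- read_csv(cfg$cleaned_data)   # load the data from whichever path is configured

Switching R_CONFIG_ACTIVE (or editing config.yml) is then enough to point a project at a different copy of the data without touching the analysis code.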
I am in a similar situation. I have many projects that are spawned from one raw dataset. Previously, when the project was young and small, I had it all in one version-controlled project. This got out of hand as more sub-projects were spawned and my git history got cluttered from working on the projects in parallel. This could be due to my lack of skill with git. My folder structure looked something like this:
project/.git
project/main/
project/sub-project_1/
project/sub-project_2/
project/sub-project_n/
I contemplated having each project in its own git branch, but then I could not access them simultaneously. If I had to change something in the main dataset (e.g., parts I had not cleaned), project 1 could become outdated and non-functional. Once I had finished project 1, I would have liked it to be isolated and contained for reproducibility. This is easier to achieve if the projects are separated. I don't think a drake/targets plan would solve this?
I also looked briefly into having the projects as git submodules but it seemed to add too much complexity. Again, my git ignorance might shine through here.
My current solution is to have the main data as an R package, and each sub-project as a separate git-versioned folder (they are actually packages as well, but this is not necessary). This way I can load a specific version of the data (using renv for package versions).
My folder structure now looks something like this:
main/.git
sub-project_1/.git
sub-project_2/.git
sub-project_n/.git
And inside each sub-project, I call library(main) to load the cleaned data. Within each sub-project, a drake/targets plan could be used.
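As a rough sketch of that data-package pattern (the package name main and the object cleaned_data are just placeholders, and clean_raw_data() is a hypothetical cleaning function):

# In the main package, a script under data-raw/ creates and exports the data:
library(usethis)
cleaned_data <- clean_raw_data()                    # hypothetical cleaning step
usethis::use_data(cleaned_data, overwrite = TRUE)   # saves data/cleaned_data.rda

# In a sub-project, with the package version pinned by renv::snapshot():
library(main)
data("cleaned_data", package = "main")   # load the versioned, cleaned dataset
head(cleaned_data)

Because each sub-project records its own version of main in renv.lock, re-cleaning the data later does not silently change already-submitted work.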

Can we compare the contents of two folders in Spotfire?

I have two environments: one is development and the other is production. Let's say I have a folder in production that holds all my metadata, such as information links (ILs), joins, data sources (DS), analyses, scripts, etc. In development I have the same folder, but with new enhancements added.
Now I want to compare the two to see what changes have been made, so that I can understand the impact.
So could you please tell me how to compare these two folders between the development and production environments?
For the requirement posted here, you can create an information link on top of the LIB_ITEMS table to fetch library item details from the Spotfire database. A different set of activities is performed at this link, but the approach can be used for your requirement as well.

Why are we adding solution folders, and what are the responsibilities of the Share and Test folders inside the solution?

I have uploaded a couple of pictures. My first question is about the first picture: why are those folders shown with dotted lines? When I look at those folders, it says they are "solution folders". Why do we need these folders? For example, if I am creating a class library as a project, why should I place that project inside a "solution folder"?
First picture.
Second picture.
My second question is about the solution structure, which was created by the Layered Architecture Solution Guidance 2010, downloaded from here: http://visualstudiogallery.msdn.microsoft.com/c8c473b5-21a1-447a-8b24-33b43411ee7f
It already has BLL, DAL, and BO folders, so why do we need a Share folder, and which classes should we put inside it? I also see a Test folder: what are its primary responsibilities in this solution, and how is it used?
Thank you all.
First Question: Those are solution folders, which are just a way of logically separating the different parts of the solution (layers, etc.). You can only have physical folders inside a project (the ones that aren't dotted-lined): Visual Studio Solutions Folder as real Folders
Second: A shared folder could be used for classes that don't neatly fit inside either the BLL or the DAL. I can't think of a reason off the top of my head for one, but I've seen examples where shared classes are created in RIA services for Silverlight.
Bonus: The Tests folder is for holding your unit tests. Look up unit testing for ideas. It is very useful to write unit tests for your code to provide a first line of defence against bugs whenever you create code (do my tests run successfully on my new code?) or modify it (do my tests still run successfully on my existing code after the change I just made?). NUnit is a popular open-source unit test framework, and MS provides its own Test Project unit test framework built into Visual Studio.

Need help with a large ASP.NET application

We have an ASP classic ERP (very large application) that we want to rewrite using ASP.NET.
I am looking for a way to organize the application so we are going to be able to separate every program / webpage (over 400) from each other. Every program needs to be independent because many developers will work on the project at the same time.
Visual Studio produces a DLL for every project, so I was wondering if it's a good idea to make a huge solution with one project per DLL.
Ex. :
Customers.aspx + Customers.aspx.vb (compiled) for presentation
Customers.DLL for the object entity
CustomersManager.DLL for business logic
CustomersData.DLL for data access
This way, we would be able to deploy every program separately without altering the others. We would also have over a thousand DLLs to manage…
Does it seem to be a good solution for a large scale application?
Anyone has a better idea?
Thanks
Source control was invented exactly for the purpose of having multiple developers concurrently work on large solutions. I do appreciate the value of having components that can be deployed independently, but perhaps that value is lost as the number of independent components that require maintenance approaches the hundreds and thousands?
Separating the application into separate presentation/business logic/DAL DLLs does make sense on a per-module basis, but not usually on a per page basis.
Consider the different functional areas of your application that are likely to share code and start there (one set of projects for each).
That seems like a huge unmanageable nightmare to me.
I've been part of several large .NET projects, and the way that works best is, as JeffN825 said, to use some sort of source control, along with classes that support your model (database) directly.
Folders under the project root can help you split things up logically "/Customers", "/Orders", etc.
If you want to make separate projects for your classes, that is also done quite a bit. Have a separate project containing all of your database objects. Create another project for Business Logic. Actually create several Business Logic projects if you feel you need it "CustomerBO", "OrderBO", etc.
But managing over 1000 DLLs and their associated web pages... that's going to be a nightmare.
I think the question was asking whether having a separate DLL for every layer of every page is a good architecture, which it is not, if for no other reason than that Visual Studio will more likely than not crawl to a halt as it tries to load hundreds of separate projects (I shudder to think what that would do, and how impossible it would be to maintain all those DLLs).
A more reasonable solution would be to have a DLL for each layer, separate each page's code into a different file, and use a source control system. This would allow developers to share code even with the worst-of-breed source control systems. If your source control system has decent branching/merging support (i.e. not SourceSafe), like TFS, SVN, or Git, you don't really even have to worry about people working on the same file simultaneously, and you can then organize your code by function rather than by page.
I'm hazarding a guess, from the question, that there is probably an astronomical amount of duplicated code that could be simplified and made easier to maintain by breaking the rigid connection to the web code and reusing code. It can be amazing to see how much less code there will be. You can do the same on the UI side with judicious use of user controls to encapsulate shared functionality. Plus, moving from ASP to .NET, you pick up things like the SiteMap control, which should reduce the code footprint as well.

How to share code with continuous integration

I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so abstracted out in your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, occasionally it will take me several days to get buildable code--but in the interim other team members may need to see my code.
If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see the most recent work. I'm wondering how this situation is best dealt with.
A tool like Code Collaborator (Google link; smartbear.com is down) would allow your peers to see your code without you committing it. Instead, you just submit it for review.
It's a little extra trouble for them to run it though.
Alternatively, set up a second branch/fork of your codebase to work in; your peers can sync to that, and it won't break the build server. When you're done working in your own branch, you can merge it back into mainline/trunk/whatever.
In a team environment, it is usually highly undesirable for anybody to be in an unbuildable state for days. I try to break large code deliveries into as many buildable check-ins as I can. At a minimum, create and check in your interfaces even if you do not have the implementation ready, so others can start to code against them.
One of the primary benefits of Continuous Integration is that it shows you when things break and when things are fixed. If you commit pieces of code that break the system, other developers will be forced to get it into a working state before continuing development. This is a good thing because it doesn't allow code changes to be made on top of broken things (which could cause issues where a co-worker's code worked on the broken system but doesn't work once the initial break is fixed).
This is also a prime example of a good time to use branches/forks, and simply merge to the trunk when all the broken things are fixed.
I am in exactly the same situation here. As a build engineer, I have this working beautifully.
First of all, let me break down the branches / projects. @Dolph Mathews has already mentioned branching and, to be honest, that is an essential part of getting your setup to work.
Take the main code base and integrate it into several personal or "smaller" team branches, e.g. branch_team_a, branch_team_b, branch_team_c.
Then set up teamcity to build against these branches under different project headings. So you will eventually have the following: Project Main, Project Team A, Project Team B, Project Team C
Thirdly, set up developer check-ins so that they run pre-commit builds against the broken-down branches. You can find the TeamCity plugin for this under Tools and Settings; they have it for IntelliJ and Visual Studio.
You now have your three-tier setup:
- A developer kick-starts a remote-run pre-commit build from their desktop against their project. If it passes, it gets checked into the repository, i.e. branch_team_a.
- Project Team A passes after several check-ins, at which point you integrate your changes from branch_team_a into the main branch.
- Project Main builds!
If all is successful, you have a candidate release. If one part fails (Project Team A, B, or C), it doesn't get integrated into main. This has been my tried-and-tested method and works every time. It also vastly improves team communication.
