BizTalk Multiple Host Instances Affecting Each Other

I'm very new to BizTalk. I have a problem here:
Both PC12-4 and PC12-0 are working on the same project at the same time. If they change code and run tests at the same time, the results affect each other. From http://msdn.microsoft.com/en-us/library/aa561042.aspx I noticed that it's not recommended to have multiple host instances in one host.
Is there any way to keep the results from affecting each other? The aim is to have multiple people working on and testing the same program at the same time: we finished one part of the application and the users want to test it now while we work on the second part, but the users' test results are affected by our new changes.
Many thanks!

If I interpret your situation correctly, the problem is not with the Host/Instance configuration. Rather, what you are trying to do, using PC12-4 and PC12-0 for the same project but for different purposes (DEV and TEST), is not supported.
Meaning, you can't have different versions of the same app installed on two different servers in the same Group. There is no way around this and there is no way to make it work the way you want. Sorry.
What you need to do is split PC12-4 and PC12-0 into two separate BizTalk Groups, meaning two separate sets of SQL Server databases in two separate instances of SQL Server: one Group for DEV and the other for user TEST.
But even then you may still have problems, because multiple developers sharing a single BizTalk Group/Server is not a workable scenario. Each developer should have their own full stack: Windows, SQL Server, BizTalk Server and Visual Studio. The best way to achieve this is dedicated developer VMs.

Related

How Do I Copy Corda Production Env To Dev For Debugging?

Very often in enterprise applications, something doesn't work as expected and you need to debug and create a fix.
Obviously you can't test in production, as you might have to save something in order to debug it, and you don't want to be responsible for accidentally sending a $1M transaction!
With traditional applications, this process is done by copying the database from production to a dev environment (maybe redacting sensitive data) and duplicating and debugging the problem there.
In Corda you have multiple nodes involved, the nodes have specific keys and the network has a truststore hierarchy.
What is the process to replicate the production structure and copy all the data from production to development in order to debug?
I think it depends on how complicated your setup is.
The easiest way to do this rigorously is within a mock network during unit testing (this is the most common setup; example here: https://github.com/corda/samples-kotlin/blob/master/Advanced/obligation-cordapp/workflows/src/test/kotlin/net/corda/samples/obligation/flow/IOUSettleFlowTests.kt).
Something I like to do a lot is to use IntelliJ breakpoints in the flows / unit tests in order to be sure something works the way I expect.
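For reference, here's a minimal sketch of what such a mock network test looks like. The flow class IOUIssueFlow and the CorDapp package names below are placeholders for whatever your own CorDapp provides; MockNetwork, StartedMockNode and TestCordapp come from Corda's standard test DSL.

```kotlin
import net.corda.core.utilities.getOrThrow
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.StartedMockNode
import net.corda.testing.node.TestCordapp
import org.junit.After
import org.junit.Assert.assertEquals
import org.junit.Before
import org.junit.Test

class ExampleFlowTests {
    private lateinit var network: MockNetwork
    private lateinit var a: StartedMockNode
    private lateinit var b: StartedMockNode

    @Before
    fun setup() {
        // Spin up an in-memory network with the CorDapp packages loaded
        // (package names below are placeholders for your own CorDapp).
        network = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(
            TestCordapp.findCordapp("com.example.contracts"),
            TestCordapp.findCordapp("com.example.flows"))))
        a = network.createPartyNode()
        b = network.createPartyNode()
        network.runNetwork()
    }

    @After
    fun tearDown() = network.stopNodes()

    @Test
    fun `flow records the transaction in both vaults`() {
        // IOUIssueFlow is a placeholder: replace it with a flow from your CorDapp.
        // Set a breakpoint inside the flow and debug this test from IntelliJ.
        val future = a.startFlow(IOUIssueFlow(10, b.info.legalIdentities.first()))
        network.runNetwork()
        val stx = future.getOrThrow()
        // Both parties should have recorded the same signed transaction.
        assertEquals(stx, b.services.validatedTransactions.getTransaction(stx.id))
    }
}
```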
Another way to do it, depending on your use case, is to use the testnet: https://docs.corda.net/docs/corda-os/4.7/corda-testnet-intro.html
Another way is to write a script that performs all of the transactions you want the nodes to do while running them locally on your machine, using the Corda shell on each of the local nodes and feeding the transactions in directly.
Copying data from production into a local network is going to be hard, because you can't fake the transaction / state history without a lot of really painful editing of the tables on each node.

Is it possible to use Kentico's staging API to pull serialized object information from a target server?

We have a large, complex Kentico build which uses Kentico's Continuous Integration locally, and Kentico's Staging module to push Kentico object changes through various environments.
We have a large internal dev team and have found that occasionally (probably due to Git merging issues) certain staging tasks aren't logged. When dealing with large deployments this is often not obvious until something breaks on the target server.
What I'd like is to write a custom module which can pull certain data from a target server (e.g. a collection of serialized web parts). I can then use this to compare with the source server to identify where objects are not correctly synchronized. I'd hoped this might be possible using the web services already exposed by Kentico which handle the staging sync tasks.
I've been hunting through a few namespaces in the Kentico API (CMS.Synchronization, CMS.Synchronization.WSE3, etc.), but it's not clear whether what I'm trying to do is even possible. Has anyone tried anything similar? If so, could you point me in the right direction?
Instead of writing your own code/tool for this, I'd suggest taking advantage of what someone else has already done. This is like Red Gate's SQL Compare for Kentico, but on steroids. It compares database data, schema AND file system changes on staging and target servers.
Compare for Kentico

Multiple deployers single Content Delivery database (Broker DB)

In the publishing scenario I have, we have multiple deployers pushing content to both file system and database (broker). Pages and Binaries are put on the file system, everything else in the Broker. We have one of the deployers putting the content into the database. Is this the recommended best practice?
If the storage configurations in all deployers also put the content into the database, how does Tridion handle this? Could this cause duplicate entries, locking failures etc?
I'm afraid at the time of writing I don't have access to an environment to test how this would work.
SDL best practice is to have a one-to-one relationship between a deployer and a publication; that means that as long as two deployers do not publish the same content (from the same publication) they will not collide, provided that, on a file system, there is separation between the deployed sites, e.g. www/pub1 and www/pub2.
Your explanation of your scenario needs some additional information to make it complete, but it sounds most likely that there are multiple Broker databases (albeit hosted on a single database server). This is the most common setup when dealing with multiple file systems on web servers combined with a single database server.
I personally do not like this setup, as I think it would be better to host file system content in a shared location and share a single DB. Or, better still, deploy everything to the database and use something like DD4T/CWA.
I have seen (and even recommended based on customer limitations) similar configurations where you have multiple deployers configured as destinations of a given target.
Only one of the deployers can write to the database for the same transaction, otherwise you'll have concurrency issues. So one deployer writes to the database, while all others write to the file system.
All brokers/web applications are configured to read from the database.
This solves the issue of deploying to multiple servers and/or data centers where using a shared file system (the preferred approach) is not feasible, be it for cost or any other reason.
In short - not a best practice, but it is known to work.
Julian's and Nuno's approaches cover most of the common scenarios. Indeed a single database is a single point of failure, but in many installations, you are expected to run multiple schemas on the same database server, so you still have a single point of failure even if you have multiple "Broker DBs".
Another alternative to consider is totally independent delivery nodes. This might even mean running a database server on your presentation box. These days it's all virtual anyway so you could run separate small database servers. (Licensing costs would be an important constraint)
Each delivery server has its own database and file system. Depending on how many you want, you might not want to set up multiple destinations/deployers, so you deploy to one and use file system replication and database log shipping to mirror the content to the rest.
Of course, you could configure two deployment systems (or three) for redundancy, assuming you can manage all the clustering etc.
OK - to come clean - I've never built one like this, but I'm fairly sure elements of this kind of design will become more common as virtualisation increases and licensing models evolve to support it. (Maybe we have to wait for Tridion to support an open source database!)

Use a fake domain locally in Visual Studio without modifying the hosts file directly

I have an application that runs at http://localhost:10205/ but I need it to run locally as http://somethingelse.com/.
This needs to happen on other computers as well, without the need to alter the hosts file.
How do I do that?
If you are all within the same network, you can add an A record to the DNS on your domain controller. Beyond that, there's not much you can do when you're dealing with multiple endpoints. As far as actually performing that task goes, you may want to ask on Server Fault.

How to avoid chaotic ASP.NET web application deployment?

Ok, so here's the thing.
I'm developing an existing web application (it started out as a classic ASP app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, each with the source code and the Visual Studio database project.
This web app has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code.
So manual deployment is really annoying, because I have to deploy the source code and then run the SQL scripts manually on each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it.
We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets.
I have no idea how to put the pieces together.
I would like to:
Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to deploy the whole application every time, because it has lots of files; I just want to deploy the files that are new or changed.
Generate diff-SQL update scripts for every database target and combine them into just one script. For this I should keep a list of the database names somewhere.
Copy the site files and execute the generated SQL script in an easy, automated way (see the rough sketch below).
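Something like this is the kind of glue I have in mind for the last point, just as a rough sketch (the universes.txt list, the combined diff.sql and the call to sqlcmd are all assumptions about how it could be wired up, and the language is only for illustration):

```kotlin
import java.io.File

// Sketch only: assumes sqlcmd is on the PATH, "universes.txt" lists one database
// name per line, and "diff.sql" is the combined script produced by the database project.
fun deploySqlToAllUniverses(server: String) {
    File("universes.txt").readLines()
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .forEach { db ->
            val exit = ProcessBuilder("sqlcmd", "-S", server, "-d", db, "-i", "diff.sql")
                .inheritIO()
                .start()
                .waitFor()
            check(exit == 0) { "Applying diff.sql to $db failed (exit code $exit)" }
        }
}

fun main() {
    deploySqlToAllUniverses("TARGET-SERVER\\SQLEXPRESS") // placeholder server name
}
```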
I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start, and I really want to get rid of this manual deployment.
If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your suggestions.
I know this is not a very specific question, but I've googled a lot about it and I can't seem to figure out how to do it. I've never used any automation tool to deploy.
Any help will be really appreciated,
Thank you all,
Regards
Have you heard of the term Multi-Tenancy? It might be worth looking that up to see if it applies to your "Multiverse", especially if one universe is never accessed by another...
See:
http://en.wikipedia.org/wiki/Multitenancy
http://msdn.microsoft.com/en-us/library/aa479086.aspx
UPDATE:
If the application and database are the same for each client (or tenant), I believe there are platforms that may help in providing the same code/DB as a SaaS application, i.e. another application/configuration layer on top that can handle the deployments, etc.?
I think these are called Platform as a Service (PaaS) applications:
see: http://en.wikipedia.org/wiki/Platform_as_a_service
Multi-Tenancy in your case may be possible, depending on client security requirements, with a bit of work (or a lot of work):
Option 1:
You could use a single instance of the application, i.e. deploy the site once and connect to a different database for each client. You would need to differentiate each client by URL to isolate content/data, setting a connection string for each, etc. (This would reduce your site deployments to one deployment.)
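Just to illustrate the idea, the lookup can be as simple as keying a connection string off the request's host name (sketched here in Kotlin purely for illustration; in your ASP.NET app the equivalent would be per-universe connection strings in web.config, and the host names and servers below are made up):

```kotlin
// Hypothetical host-to-connection-string map; names and servers are placeholders.
val universeConnectionStrings = mapOf(
    "universe1.example.com" to "Server=dbserver;Database=Universe1;Integrated Security=true",
    "universe2.example.com" to "Server=dbserver;Database=Universe2;Integrated Security=true"
)

// Resolve the connection string for the universe identified by the request's host header.
fun connectionStringFor(hostHeader: String): String =
    universeConnectionStrings[hostHeader.lowercase()]
        ?: error("Unknown universe host: $hostHeader")
```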
Option 2:
You could use a single instance of the application together with a single database. You would need to add a "TenantID" column to each table and adjust all your code to accept a TenantID, to ensure data security/isolation. Again, you would need to detect/differentiate the tenant based on the URL to set the TenantID for the session, which is then used for every database call. (This would reduce your site and database deployments to one of each.)
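Again just to illustrate, every data access call then carries the TenantID as a filter (a Kotlin/JDBC sketch purely for illustration; the Orders table and column names are made up, and in your ASP.NET code the same WHERE TenantID = @tenantId pattern would apply to every query):

```kotlin
import java.sql.Connection

// Sketch: every query is constrained by the tenant resolved from the request URL.
// "Orders", "OrderId" and "TenantID" are placeholder table/column names.
fun loadOrderIds(conn: Connection, tenantId: Int): List<Int> =
    conn.prepareStatement("SELECT OrderId FROM Orders WHERE TenantID = ?").use { stmt ->
        stmt.setInt(1, tenantId)
        stmt.executeQuery().use { rs ->
            generateSequence { if (rs.next()) rs.getInt("OrderId") else null }.toList()
        }
    }
```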

Resources