How can I synchronize physical files manually in a Kentico web farm?

We have a non-standard Kentico architecture which Kentico have advised is supported as long as synchronization of physical files between load balanced servers is disabled and handled manually. What is the correct way to manually synchronize web farm server files? I wondered about using a tool like DirSync but assume this would require one server to act as the primary, whereas with Kentico a new media file, for example, may be initially saved to any of the physical servers.
I'm hoping to identify a definitive solution to this issue. Thanks.

Kentico web farms synchronize physical files automatically by default, as long as the web farm itself is working properly. Because each request can be served by a different server, Kentico serializes the file's binary data into the database, which is shared by all servers, and then re-creates the file on any server where it is missing.
I'm not aware of any situation where web farms are supported but file synchronization isn't. It's either all or nothing; there is no middle ground.
Can you be more specific about why the synchronization of physical files is not working on your end? As long as all servers can see the database (which they should, otherwise the web farm is not working at all), file synchronization will work.
PS: If your files are not synchronized, go to the Web farm -> Tasks application and check how many tasks are listed. If there are no tasks (or only a few that are constantly being deleted), then your web farm is working; if there are tasks older than a few minutes, then your web farm is not working at all.
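If you want to script that check instead of opening the Tasks application, a simple count of pending tasks against the database is enough. This is only a sketch: the table name CMS_WebFarmServerTask is an assumption based on a typical Kentico schema, and the connection string is a placeholder, so verify both against your own instance.
    // Sketch: count pending web farm tasks.
    // CMS_WebFarmServerTask is an assumed (typical) Kentico table name - verify it for your version.
    using System;
    using System.Data.SqlClient;

    class WebFarmTaskCheck
    {
        static void Main()
        {
            // Placeholder - use the value of your CMSConnectionString here.
            var connectionString = "Server=.;Database=Kentico;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM CMS_WebFarmServerTask", connection))
            {
                connection.Open();
                var pending = (int)command.ExecuteScalar();

                // A count that keeps growing means tasks are not being processed,
                // i.e. the web farm is not working.
                Console.WriteLine("Pending web farm tasks: " + pending);
            }
        }
    }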

I read the thread above and would recommend you take a look at this tool from BizStream: https://devnet.kentico.com/marketplace/modules/compare-for-kentico
I haven't had a chance to play with it myself, but they are a top-notch shop, so I'd bet it's a top-notch product.
Otherwise you are going to have to go the custom sync code route; a rough sketch of what that might look like is below.
We've tried to do moves via the SQL tables and it is 'possible', but the number of interconnected relationships makes it quite unrealistic to build or support.
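For what it's worth, 'custom sync code' usually ends up being a small watcher service along these lines: watch the local media folder and push new or changed files to the other servers over UNC shares. This is only a sketch under assumptions; the folder paths and server names are hypothetical, and deletes, renames, retries and change loops between servers are deliberately left out.
    // Minimal sketch of custom file sync between web farm servers.
    // Paths and server names are hypothetical; deletes, renames, retries
    // and change loops between servers are not handled here.
    using System.IO;

    class MediaFolderSync
    {
        static readonly string LocalMediaRoot = @"C:\inetpub\wwwroot\CMS\MediaFiles";
        static readonly string[] PeerMediaRoots =
        {
            @"\\WEB02\CMS\MediaFiles",
            @"\\WEB03\CMS\MediaFiles"
        };

        static void Main()
        {
            var watcher = new FileSystemWatcher(LocalMediaRoot)
            {
                IncludeSubdirectories = true,
                EnableRaisingEvents = true
            };
            watcher.Created += (sender, e) => PushToPeers(e.FullPath);
            watcher.Changed += (sender, e) => PushToPeers(e.FullPath);
            System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite);
        }

        static void PushToPeers(string fullPath)
        {
            if (!File.Exists(fullPath))
            {
                return; // Ignore directory events and files removed mid-flight.
            }
            var relativePath = fullPath.Substring(LocalMediaRoot.Length).TrimStart('\\');
            foreach (var peerRoot in PeerMediaRoots)
            {
                var target = Path.Combine(peerRoot, relativePath);
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.Copy(fullPath, target, overwrite: true);
            }
        }
    }
The hard part is everything the sketch leaves out, which is exactly why a ready-made tool tends to be the saner route.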

Deploying Flex applications

I have a Flex application which has to be deployed on some server. The typical form of access would be invoking the URL. How do I go about it?
Should I have multiple instances of the application running on the same server, or deploy the application on different servers and use a load balancer for routing?
If I must have multiple instances, how do I do that?
On a given day, the application is expected to get around 2000-3000 hits. What are all the factors to keep in mind during deployment?
Any pointers would be helpful.
Thanks.
I'm actually a bit unsure what specifically you're asking, so I'll take your questions one by one.
I have a Flex application which has to be deployed on some server. The typical form of access would be invoking the URL. How do I go about it?
Put your SWF files on a web server. For best results export a release build first. Flash Builder makes this easy.
Should I have multiple instances of the application running on the same server, or deploy the application on different servers and use a load balancer for routing?
Probably not, but it depends. A SWF is just a binary asset. As far as the server is concerned, it is no different from a JPG, GIF, or PNG file. Whether or not you need a load balancer to serve the SWF depends on the size of the SWF, the number of simultaneous hits you expect, other traffic on the server, the bandwidth of your hosting provider, and probably a whole slew of other considerations that escape me at the moment.
If your SWF is making calls to the server (very common in Flex applications), that may also be a consideration.
If I must have multiple instances, how do I do that?
Multiple instances of what? Of the SWF? Why would you need to do that? As I said, as far as the server is concerned, a SWF is just a binary asset. In theory you could keep as many copies of the file on your server as you want; in practice most people just use a single one.
On a given day, the application is expected to get around 2000-3000 hits. What are all the factors to keep in mind during deployment? Any pointers would be helpful.
That strikes me as a low-traffic site; however, it depends on what you're doing.
Despite my answer, I have to vote to close, as your question is vague and ambiguous. I'm not sure what you want to know.
I think you are missing a basic piece of information about your application.
When you create Flex/Flash applications and put them on your server, they are always SWF files executed client-side.
So I don't think you have to worry about the workload on your server, because it isn't your server that runs the application but the client's PC.
As long as your server can manage 2000-3000 hits per day, you can be quite sure it will always run smoothly.
Claudio.

How to avoid chaotic ASP.NET web application deployment?

Ok, so here's the thing.
I'm working on an existing web application (it started out as a classic ASP app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are four developers using local instances of SQL Server 2005 Express, each with the source code and the Visual Studio database project.
This web app has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code.
So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts manually against each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it.
We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets.
I have no idea how to put the pieces together.
I would like to:
Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By a "sync" deploy I mean that I don't want to redeploy the whole application, because it has lots of files; I just want to deploy the files that are new or changed.
Generate diff-SQL update scripts for every database target and combine them into just one script. For this I would need a list of the database names somewhere.
Copy the site files and execute the generated SQL script in an easy, automated way.
I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start, and I really want to get rid of this manual deployment.
If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your suggestion.
I know this is not a very specific question, but I've googled a lot about it and still cannot figure out how to do it. I've never used any automation tool to deploy.
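To make the database part concrete, a small runner along the lines of the sketch below is roughly what I have in mind: keep the universe database names in a list (or a config file) and run the generated update script against each one. The database names and script path here are placeholders.
    // Sketch: run one generated diff script against every "universe" database.
    // Database names and the script path are placeholders.
    // Note: a plain SqlCommand cannot execute scripts containing GO batch
    // separators - split on GO or shell out to sqlcmd for those.
    using System;
    using System.Data.SqlClient;
    using System.IO;

    class UniverseScriptRunner
    {
        static void Main()
        {
            var databases = new[] { "Universe_A", "Universe_B", "Universe_C" };
            var script = File.ReadAllText(@"C:\deploy\schema-update.sql");

            foreach (var database in databases)
            {
                var connectionString =
                    "Server=TARGETSERVER;Database=" + database + ";Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(script, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                    Console.WriteLine("Updated " + database);
                }
            }
        }
    }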
Any help will be really appreciated,
Thank you all,
Regards
Have you heard of the term multi-tenancy? It might be worth looking it up to see whether it applies to your "universes", especially if one universe is never accessed by another...
See:
http://en.wikipedia.org/wiki/Multitenancy
http://msdn.microsoft.com/en-us/library/aa479086.aspx
UPDATE:
If the application and database are the same for each client (or tenant), I believe there are applications that may help in providing the same code/DB as a SaaS application, i.e. another application/configuration layer on top that can handle the deployments, etc.
I think these are called Platform as a Service (PaaS) applications:
see: http://en.wikipedia.org/wiki/Platform_as_a_service
Multi-Tenancy in your case may be possible, depending on client security requirements, with a bit of work (or a lot of work):
Option 1:
You could use one instance of the application, i.e. deploy the site once and connect to a different database for each client. You would need to differentiate each client by URL to isolate content/data, setting a connection string for each, etc. (This would reduce your site deployments to a single deployment.)
Option 2:
You could use a single instance of the application and a single database. You would need to add a "TenantID" to each table and adjust all your code to accept a TenantID to ensure data security/isolation. Again, you would need to detect/differentiate the tenant based on the URL and set the TenantID for the session, to be used in every database call. (This would reduce your site and database deployments to one of each.)
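To give a rough idea of Option 1 in code: map the incoming host name to a connection string, along the lines of the sketch below. The host names and connection string names are made up, and in a real application you would load the mapping from configuration rather than hard-coding it.
    // Sketch for Option 1: pick the connection string for the current tenant
    // based on the request host. Host names and connection string names are
    // placeholders; load the mapping from configuration in a real application.
    using System.Collections.Generic;
    using System.Configuration;
    using System.Web;

    public static class TenantConnection
    {
        static readonly Dictionary<string, string> HostToConnectionName =
            new Dictionary<string, string>
            {
                { "universe-a.example.com", "UniverseA" },
                { "universe-b.example.com", "UniverseB" }
            };

        public static string Current()
        {
            var host = HttpContext.Current.Request.Url.Host.ToLowerInvariant();
            string connectionName;
            if (!HostToConnectionName.TryGetValue(host, out connectionName))
            {
                throw new HttpException(404, "Unknown tenant: " + host);
            }
            return ConfigurationManager.ConnectionStrings[connectionName].ConnectionString;
        }
    }
Option 2 would look much the same, except the lookup would return a TenantID to stamp on every query instead of a separate connection string.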

ASP.NET Web App Distribution

What is the simplest way to distribute an ASP.NET web application? I tried looking at some of the open-source ASP.NET projects out there to see how they distribute their apps and how they handle updates, and their processes seem rather complicated to me (not for me to perform, but for non-technical users). A lot of them entail backing up the entire installed project, deleting specific folders, and saving parts of the web.config. I am hoping to find a solution that makes the update process in particular as simple as possible.
Thanks.
I am working on a project with a similar requirement now. We decided to use WiX to create an installer that can be run on the server or machine where the site is installed. WiX is incredibly powerful, but takes a bit to get the hang of.
There are plenty of other open-source and paid installer technologies as well. Here is a post with some info on a few.
CommunityServer provides a setup MSI that will create a virtual directory, generate the SQL database, and populate it with default data. Updating for point releases, though, is still a manual process involving an update.sql file and having everyone download and then merge binary and static file changes.
They probably could have created an update MSI too, but because so many people customize CommunityServer, it is probably better to let people merge changes themselves.
Do you mean in terms of breaking up the functionality into tiers that could be handled on separate machines, e.g. having three servers for a three-tier architecture where one is the DB server, one handles the middleware, and the other handles the requests in ASP.NET? Another point here would be going from one web server to multiple web servers in terms of scaling up.
Or are you referring to deployment?
It's a web application, man. Serve it publicly, require registration, and move on. Isn't that the point of a web application?

Gotchas: Upgrading from single servers to web farms

Our company currently runs two Windows 2003 servers (a web server and an MSSQL 8 database server). We're planning to add another couple of servers for redundancy/availability purposes in a web farm setup. Our web sites are predominantly ASP.NET; we do have a few PHP sites, but these are mainly static with no DB.
Does anyone who has been through this process have any gotchas or other points I should be aware of? And would using Windows Server 2008 offer any additional advantages for this situation (so I can convince my boss to upgrade :) ?
Thanks.
If you have dynamic load balancing (i.e. my first request goes to server X, but my next request may go to server Y or Z), you will find that in-proc sessions do not work. So you will need either sticky sessions (your load balancer will ALWAYS send me, i.e. my session, to server X) or out-of-process sessions (e.g. stored in SQL Server).
Like Michael says, you'll need to take care of your session state. Ideally, make it lean and store it out of process. You'll have a similar challenge with the cache, depending on how you use it, and you might be interested in looking at a more robust caching technology if you currently only use ASP.NET caching.
Don't forget things like machine keys and validation in your web.config. The machineKey values need to be consistent across your servers.
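If you need key values to share, a small generator like the sketch below produces a validationKey/decryptionKey pair you can paste into each server's machineKey element. The 64-byte and 32-byte lengths are common choices for HMACSHA256 validation and AES decryption respectively, but treat this as a sketch and check the recommendations for your framework version.
    // Sketch: generate random keys to share across servers in <machineKey>.
    // 64-byte validation key (HMACSHA256) and 32-byte decryption key (AES)
    // are common choices; confirm them for your framework version.
    using System;
    using System.Security.Cryptography;

    class MachineKeyGenerator
    {
        static void Main()
        {
            Console.WriteLine("validationKey=\"" + RandomHex(64) + "\"");
            Console.WriteLine("decryptionKey=\"" + RandomHex(32) + "\"");
        }

        static string RandomHex(int byteCount)
        {
            var bytes = new byte[byteCount];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(bytes);
            }
            return BitConverter.ToString(bytes).Replace("-", "");
        }
    }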
Read up on IIS7 and you should be able to pick out several good examples to show off to your boss.
A web farm can give you opportunities and challenges with deployment that should not be overlooked.
I don't have specific experience with the setup above, but with general moves of this kind I would recommend phasing the approach: move to Windows 2008 first, and then move to the farm.
One additional thing to look at is your deployment plan. Deployment plans seem to be sadly overlooked and/or undervalued. Remember that you are deploying to multiple nodes and you want to take into account how you want to deploy and test in a logical fashion.
For example, assume you have four nodes in your farm. Do you pull two out of the cluster, update and test them, then swap out the other two and repeat? Determine whether your current deployment process fits in with the answer you provide. Just because you have X times the number of servers does not mean that you want or need to do X times the amount of work.
Just revisiting the caching part of the conversation for a moment. You should definitely take a look at a distributed caching solution. If you are pre-caching data and using callbacks with cache removals, you can really put a pounding on the database if you are not careful. Also, a lot of the distributed caching solutions offer some level of session state management, as well. I have been very much enjoying Microsoft's Velocity project, although it is just a second CTP release and not ready for production.
In addition to what others have said, you might want to consider looking into Richard Campbell's (of .NET Rocks!) product:
http://www.strangeloopnetworks.com/
We use the ASP.NET State Server for handling our sessions. This comes free with Windows Server 2003/2008.
We then have to make sure the machine keys are the same (a setting in your web.config files).
I then manually take each site offline (using app_offline.htm, or whatever the magic file is called). Alternatively, you can use IIS and just turn the site off and the 'offline' site on.
That's about it. You could worry about distributed caching, but that's pretty hard-core stuff. You can get a lot of good mileage out of the default output caching in ASP.NET. I'd start there before delving into the complexity (and, for some products, the cost) of distributed caching.
Oh, we're using an F5 load balancer that does NOT do sticky sessions, so we need to maintain our sessions out of process, which is why we're using the ASP.NET State Server.
One other gotcha, aside from the session issues described by the other posters, is apps writing to the local file system. Scaling out to a web farm will break such apps if they assume files are on the local machine. For example, uploaded files might or might not be available depending on which server is hit. Changing the paths to point to a shared drive should fix this.
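In code, the usual fix is to stop building upload paths from the local web root and read a shared location from configuration instead, roughly like this sketch (the appSetting name and UNC path are hypothetical):
    // Sketch: save uploads to a shared location read from configuration rather
    // than the local web root, so every server in the farm sees the same files.
    // The appSetting name and UNC path are hypothetical.
    using System.Configuration;
    using System.IO;
    using System.Web;

    public static class UploadStorage
    {
        public static string Save(HttpPostedFile file)
        {
            // e.g. <add key="UploadRoot" value="\\FILESERVER\uploads" /> in web.config
            var uploadRoot = ConfigurationManager.AppSettings["UploadRoot"];
            var targetPath = Path.Combine(uploadRoot, Path.GetFileName(file.FileName));
            file.SaveAs(targetPath);
            return targetPath;
        }
    }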

How do I cluster an upload folder with ASP.NET?

We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment, however we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content, which might include photos and such. These were legacy applications; we did not want to edit them to use a database, and a SAN solution was too expensive for our situation.
We ended up using a file replication service on the two clustered servers. This ran as a service on both machines, using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service synced the data in the file system folders, making it available to be served from either web server.
Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.
In our scenario, we have a separate file server that both of our front-end app servers write to; that way, either server has access to the same set of files.
The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and provide failover.
This also has the benefit that you don't have to provide sticky load balancing; the upload can be handled by one server and the edit by another.
A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for Windows?
The problem with a simple shared filesystem is the lack of redundancy (what if the file server goes down?).