I'm putting together the deployment plan for a major release next week (we're basically taking over a site).
I've never had to deploy to multiple web servers before.
Do I need to copy the files to each web server, or is there a tool which will do this for me?
I have to supply an IP address to some 3rd-party vendors; which IP do I give them, since there are four separate servers?
Please check this thread; I hope it helps: What method do you use to deploy ASP.Net applications to the wild?
I would have expected there to be a load balancer spreading the traffic between the servers. In that case you would give out the IP address of the external interface of the load balancer.
For updates in this scenario I would typically take one server out of the load balancer's pool, update it, and test that it works. With 4 servers, I would then take a second one out and update/test it the same way. Next, I'd switch the load balancer so that the 2 updated servers are live and the other 2 are offline, update/test those, and then put them back into the pool so everything is live and your update is complete with no downtime; a sketch of this loop follows below. Of course, I'd typically do this during a period of low traffic where possible.
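If you do script it, the loop itself is simple. Here is a minimal, hypothetical Python sketch; the four helper functions are placeholders for whatever your load balancer and deployment tooling actually expose, not a real API:

```python
# Hypothetical sketch of the rolling update described above. The four
# helper functions are placeholders, not a real load balancer API.
def remove_from_pool(server): print(f"LB: drain {server}")           # placeholder
def deploy(server):           print(f"deploy new build to {server}")  # placeholder
def health_check(server):     return True                             # placeholder smoke test
def add_to_pool(server):      print(f"LB: enable {server}")           # placeholder

SERVERS = ["web1", "web2", "web3", "web4"]  # example names
BATCH = 2  # update half the farm at a time, as described above

def rolling_update(servers, batch):
    for i in range(0, len(servers), batch):
        group = servers[i:i + batch]
        for s in group:
            remove_from_pool(s)  # stop routing traffic to this server
            deploy(s)            # push the new build
            if not health_check(s):
                raise RuntimeError(f"{s} failed its smoke test - stop the rollout")
        for s in group:
            add_to_pool(s)       # updated servers go back live

rolling_update(SERVERS, BATCH)
```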
Whether you do this using some sort of automatic script or manually would depend on what systems you have in place and how often you would expect to make updates.
It's worth saying that Microsoft have since released a couple of tools to help with this:
http://www.iis.net/download/webdeploy
http://www.iis.net/download/WebFarmFramework
I'm writing a Qt/C++ application and I plan to add a networking component, with a socket connection to a server also implemented in Qt.
If I host the server locally there is no real problem.
But if I want to share my application (the client part) with some people, and be sure my server is always running, the best option would be a remote server.
Could you give me some clues on how to do this? The steps to follow in this case are still not clear to me.
Is there a better way to do this?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you need, but it must stay turned on the whole time; when there is no other work it will just consume power. You must also provide all the hardware (server components), the software to run it (an operating system), the network equipment and connection (a router that needs special set-up [NAT, port forwarding, ...], plus sufficient speed and reachability of the internet connection), and most likely some security devices/software (firewalls and so on).
This is the best option for basic development and testing, but once the service should serve a public audience, it is not really worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This was the leading option over the last 20-30 years, when web and application developers put their software on a server prepared by someone else. A dedicated server is a physical server running at a provider's location; the provider lends you the hardware (and maybe a license for the OS/other software). A virtualized machine is one piece of hardware (a server) hosting multiple virtual servers (several clients running on the same hardware).
The general benefit is that the networking/security/hardware issues are handled by the hosting provider; you are just borrowing some disk space and compute time/performance. Normally the company will provide a whole server on which you can set up several services, run multiple protocols, etc.
An ideal solution for websites and a single instance (or a few instances) of a server application.
3. Cloud hosting
This is the newest technology at the moment (it has been around for 10-15 years [e.g. AWS running since 2006, Azure since 2010]). The datacenter owners (from point 2) went further and created applications on top of their servers that do all the set-up work for you (mostly automatically). In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of stuff. The clear benefit is that you have to spend only a minimum of time to set things up and they will run, with high uptime (e.g. 99.9995%).
The difference between dedicated & cloud: on a dedicated server you can install almost any OS that fits your needs, run just the services you want, and have full control. In a cloud solution you don't have as much "physical" control, and the data more or less lives in datacenters all over the world. But it is generally a more scalable solution, and once your app is used by a lot of public users, this is the best way to go.
Common approach:
The most common path is: while you develop, you run a local server on which you deploy, test, and improve. Once it is stable, you order a server, either in the cloud or as a dedicated/virtual machine, and deploy there. Some developers know from the very beginning that their app will run on cloud services, so they order those first and develop against them, but in most cases there is no need for that.
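To make the local-vs-remote point concrete, here is a minimal stand-in sketch (plain Python sockets rather than Qt, since the shape is the same): the server binds a port on whatever machine it runs on, and the only thing the client changes when moving from local hosting to a rented server is the host name it connects to. Host and port values are examples.

```python
import socket
import threading

HOST, PORT = "0.0.0.0", 5050  # example values; the server binds all interfaces

def handle(conn):
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)  # plain echo; replace with your own protocol

def serve():
    """Run this on the server machine, whether local or rented."""
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

def send(message, host="localhost", port=PORT):
    """Client side: moving to a remote server only changes `host`."""
    with socket.create_connection((host, port)) as c:
        c.sendall(message.encode())
        return c.recv(4096).decode()
```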
From reading the documentation on MSDN, I understand that it is recommended to create separate hosts by functionality (sending hosts, receiving hosts and processing hosts), and that if there is only one host on the BizTalk server, that host can perform all of the receiving, sending, and processing functionality.
My question is: is it possible to have multiple hosts where each host performs its own sending, receiving and processing functions without affecting the others?
This is for multiple developers working on the same project, because our current situation doesn't allow us to have a full set of SQL Server databases and servers for each developer, or to use VMs.
Thanks a lot!
Using multiple hosts is not a solution for letting multiple developers work on a single server. A single send/receive adapter can only be assigned to one host.
You will also run into other problems: since all the configuration settings are shared in a single database, a change from one developer will affect the others.
This same question was asked and answered at MSDN. What you are trying to do is not supported and will not work. There is no way around this.
You must deploy the same application code to each computer in a BizTalk Group.
Sharing a BizTalk computer for development work is not a workable or productive solution and will have a definite negative effect on productivity.
You are correct, the best way to handle DEV is a VM with the entire stack. This is the issue you must address in your environment.
I would like to write an application to manage files, directories and processes on hundreds of remote PCs. There are measurement programs running on these machines, which are currently managed manually using TightVNC / RealVNC. Since the number of machines is large (and increasing) there is a need for automatic management. The plan is that our operators would get a scriptable client application, from which they could send queries and commands to server applications running on each remote PC.
For the communication I would like to use a TCP-based custom protocol, but opening pinholes in every firewall along the way is administratively complicated and would take a very long time. Fortunately, there is a program with a built-in TinyWeb-based custom web server running on every remote PC, and port 80 is open in every firewall. These web servers serve requests coming from a central server by starting a CGI program, which loads and sends back parts of the log files of the measurement programs.
So the plan is to write a CGI program and communicate with it from the clients through HTTP (using GET and POST). Although (most of) the remote PCs are inside the corporate intranet, they are scattered all over the country, so I would like to secure the communication. It would not be wise to send commands which manipulate files and processes in plain text. Unfortunately, the program which contains the web server cannot be touched, so I cannot simply prepare it for HTTPS. I can only implement the security layer in the client and in the CGI program. What should I do?
I have read all similar questions in SO, but I am still not sure what to do in this specific situation. Thank you for your help.
There are several webshells, but as far as I can see (http://www-personal.umich.edu/~mressl/webshell/features.html) they run on top of an existing SSL/TLS layer.
There is also S-HTTP.
There are several ways of authenticating to a server (username/password) in a protected way without SSL, e.g. http://www.switchonthecode.com/tutorials/secure-authentication-without-ssl-using-javascript. But these solutions are focused only on sending a username/password to the server.
Would it be possible to implement something like the message-level security in SOAP/WS-Security? I realise this might be a bit heavy duty and complicated to implement (a rough sketch of the idea follows the list below), but at least it is:
- standardised
- definitely secure
- possibly supported by some libraries or frameworks you could use
- suitable for HTTP
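For example, here is a rough sketch (Python, standard library only) of application-level message authentication in that spirit: the client and the CGI program share a secret key distributed out of band, every command is signed with an HMAC, and a timestamp plus nonce guard against replay. Note this only authenticates messages; if you also need confidentiality you would additionally encrypt the body with a symmetric cipher, which this sketch does not do. The endpoint URL and field names are made up for illustration.

```python
import hashlib
import hmac
import json
import os
import time
import urllib.request

SHARED_KEY = b"replace-with-a-long-random-pre-shared-key"  # assumption: distributed out of band

def sign_command(command: dict) -> bytes:
    """Client side: serialize a command and attach an HMAC the CGI program can verify."""
    envelope = {
        "cmd": command,
        "ts": int(time.time()),        # timestamp: lets the server reject stale messages
        "nonce": os.urandom(8).hex(),  # nonce: lets the server reject replays it has seen
    }
    body = json.dumps(envelope, sort_keys=True)
    mac = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "mac": mac}).encode()

def verify_command(raw: bytes, max_age: int = 60) -> dict:
    """Server side: recompute the HMAC and check freshness before acting on anything."""
    wrapper = json.loads(raw)
    expected = hmac.new(SHARED_KEY, wrapper["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, wrapper["mac"]):
        raise ValueError("bad MAC: message forged or corrupted")
    envelope = json.loads(wrapper["body"])
    if time.time() - envelope["ts"] > max_age:
        raise ValueError("stale message: possible replay")
    return envelope["cmd"]

# Hypothetical usage: POST a signed command to the CGI endpoint.
payload = sign_command({"action": "tail_log", "path": "/var/log/measure.log"})
req = urllib.request.Request("http://remote-pc/cgi-bin/manage", data=payload)
# urllib.request.urlopen(req)  # left commented out: the endpoint above is made up
print(verify_command(payload))  # round-trip check: prints the original command
```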
We have a web-application product that we sell to companies that is hosted at our servers.
The product contains a couple of web applications, Windows services and a SQL Server database.
Right now we have only one client that uses our product. We have two servers: one for the web apps and services, and the other for the database.
In order to add the product for another client, we have to duplicate all the apps and the database and run them separately.
As we start expanding, and since some companies will require more server power than others, I need to plan the server infrastructure.
Having two servers for each client sounds ridiculous; hosting costs would be huge. What will happen when I have 10 clients? And probably some servers will need more power than others, leaving some servers at 30% of their capacity while others run at 70%.
One thing I really care about is separating the database of each client, so that in case of a server compromise only one database is at risk.
So... I thought about Virtual Machines...
Does that sound right?
Do I need two powerful host servers for the virtual machine instances (one for web and the other for the databases)?
What about load balancing, etc.?
Will it require more maintenance time just because I use virtual machines?
Are there any hardware recommendations?
Any help will be appreciated
Many thanks
Virtual machines are definitely the safest way to separate clients and will give you the flexibility to allocate a specific percentage of resources to specific clients.
However, using separate processes on the same physical machine will perform better (though not always significantly) and will allow more dynamic use of resources (i.e., if one client spikes, it can use the resources it needs). That setup will not let you control resource allocation nearly as easily, though. You'll also have to build your own monitoring tools to see and analyze which processes (clients) are using which resources (piggybacking on perfmon).
Using separate processes is also dangerous if your application wasn't designed for it. Anywhere the application caches data on the file system, or accesses anything besides memory and the database, needs to be thoroughly scrubbed to make sure data from different clients is not co-mingled or shared.
Separate virtual machines are more work to manage: each one is pretty much its own computer, so you have to manage all the VMs plus the physical machine.
You may also want to consider hosting in a more dynamic environment like Amazon AWS or Microsoft's Azure which will allow you to more easily scale up/down as necessary than a VM at a traditional host.
I am working on ASP.NET (newbie) and I am trying to understand what it means to do "load balancing" for a web site. The website will be used by multiple users and resources (database, web service, ...).
If anyone could help me understanding the concept of the load balance for asp.net web site, I would really appreciate it.
Thanks.
One load-balancing-related issue you may want to be aware of at development time: where you store your session state. This MSDN article gives a good overview of your options.
If you implement your ASP.NET system using "out-of-process" or "SQL Server mode" session state management, that will give you some additional flexibility later if you decide to introduce a load balancer to your deployed system:
Your load balancer needn't handle session affinity. As one poster mentioned above, all modern load-balancers handle it anyway, so this is a minor consideration in any case.
Web gardens (a sort of IIS/server-implemented load balancer) REQUIRE the use of "out-of-process" or "SQL Server mode" session state management. So if your system is already configured that way, you'll be one step closer to being able to use web gardens.
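To illustrate why this matters, here is a Python stand-in (not ASP.NET): with in-process state, each worker keeps its own private session memory, so a user whose requests land on different workers loses their session; a shared out-of-process store does not have that problem.

```python
# In-process session: each server (worker) keeps its own private memory.
in_proc_a = {}  # sessions held by server A
in_proc_b = {}  # sessions held by server B
shared = {}     # stand-in for an out-of-process / SQL Server session store

def handle_request(store, user, key, value=None):
    """Read or write one session value against the given session store."""
    session = store.setdefault(user, {})
    if value is not None:
        session[key] = value
    return session.get(key)

handle_request(in_proc_a, "alice", "cart", value=["book"])  # request 1 lands on A
print(handle_request(in_proc_b, "alice", "cart"))           # request 2 lands on B -> None
handle_request(shared, "alice", "cart", value=["book"])     # same flow, shared store
print(handle_request(shared, "alice", "cart"))              # -> ['book'] from any server
```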
What is it?
Load balancing simply refers to distributing a workload between two or more computers. As a concept, it's not unique to ASP.NET. Although having separate machines for your database and web server could be called "load balancing", it more commonly refers to using multiple machines to serve a single role, such as having multiple web servers.
Should you worry about it? Probably not. Do you already have a performance problem? Are your database and web server on their own machines? If you do find that your server resources are strained, it would probably be easier to scale up (a more powerful single machine) than out (load balancing). These days, a dedicated box can handle a LOT of traffic if your code is decent.
Load balancing, in the programming sense, is not specific to ASP.NET; it is a technique for distributing server load across two or more machines rather than putting it all on one machine. Unless you will have many thousands (millions?) of users, you probably do not need to worry about it.
Check the Wikipedia article for more information.
Load balancing is not specific to any one technology stack, be it ASP.NET, JSP, etc. To load balance is to spread the incoming requests to a web site over more than one server. This is typically done with a software or hardware load balancer that sits in front of two or more web servers and delegates the incoming traffic, although the technique is not limited to web servers. See: Load Balancing
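As a toy illustration of the idea (Python, standard library only; a real deployment would use something like nginx, HAProxy, ARR or a hardware balancer, and the backend addresses here are made up):

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Example backend addresses; in reality these would be your web servers.
BACKENDS = itertools.cycle(["http://10.0.0.11:8080", "http://10.0.0.12:8080"])

class Balancer(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)  # round robin: rotate through the pool
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)  # relay the backend's answer to the client

# Clients would point at this one address instead of any individual server:
# HTTPServer(("", 8000), Balancer).serve_forever()
```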
Enjoy!
I've never used it, but an option is IIS Application Request Routing.
IIS Application Request Routing (ARR) 2.0 enables Web server administrators, hosting providers, and Content Delivery Networks (CDNs) to increase Web application scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching.
In a typical web server/database scenario, the database is almost always guaranteed to load up the machine first, because dealing with stored data requires more resources. Before you even start looking at load balancing your web server, you need to think about how to load balance the database.
Spreading one database across multiple servers is a lot harder than load balancing a web server. One technique that can be used is sharding (horizontal partitioning), where some records are stored on one server and other records on another. For example, records with IDs 1-900000 are on server 1 and records with IDs 900001 and up are on server 2.
In comparison to DB load balancing, spreading the load across multiple ASP.NET servers is not overly complicated. Most of the session issues can be easily mitigated by using out-of-process session state and/or never talking to Application.Cache directly. Data load balancing, on the other hand, is hard and requires a lot of planning and trial and error. In most cases, talking to a load-balanced DB requires an ORM which supports it (e.g. NHibernate) or your own data access layer. The reason is that you need to take connection establishment out of the code that uses the database, so that the decision of which DB to talk to is handled in one place.
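A tiny sketch of what that "one place" might look like for the ID-range sharding described above (Python; host names and ranges are hypothetical):

```python
# Hypothetical host names and ID ranges for the range-based sharding above.
SHARDS = [
    ("db-server-1", 1, 900_000),     # (host, first ID, last ID)
    ("db-server-2", 900_001, None),  # None = open-ended upper bound
]

def shard_for(record_id: int) -> str:
    """The one place that decides which database server owns a record."""
    for host, low, high in SHARDS:
        if record_id >= low and (high is None or record_id <= high):
            return host
    raise ValueError(f"no shard configured for ID {record_id}")

print(shard_for(42))         # -> db-server-1
print(shard_for(1_000_000))  # -> db-server-2
```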
An exact solution is to save the session into SQL Server with a stored procedure; to read the session, call the 'SessionCheck' stored procedure.
I'd add that it really isn't something to worry about. By the time you need a load balancer, you can probably afford one of the neato newfangled ones with sticky sessions so you don't even have to deal with the session boogeyman.