My question is about ASP.NET session management. In our current web application we have "sticky sessions" (a user is always redirected to the server it started talking to). Below is my problem statement.
From one of our clients there is a huge number of requests hitting our servers, and somehow those requests come from one or at most two IPs. We have 5 servers running to serve those requests. The problem is that one or two servers may be getting hit heavily while the other servers sit idle, because sticky sessions will not allow a request to be processed by server B when the session was initially answered by server A.
What we need is exactly the opposite: any server should be able to process an incoming request while maintaining the continued conversation.
I have put my problem in very plain words. Any pointers will be appreciated.
Why not just store the session state on a SQL server?
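For illustration, a minimal sketch of what that looks like in ASP.NET; the server name is a placeholder, and the session database has to be created once with aspnet_regsql.exe:

<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=YourSqlServer;Integrated Security=SSPI;"
              timeout="20" />

aspnet_regsql.exe -S YourSqlServer -E -ssadd -sstype p

With session state out of process, any of the 5 servers can answer any request, and the load balancer no longer needs sticky sessions.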
I have a reverse proxy setup where Nginx works as the proxy server and load balancer. My biggest problem is that I have two app backends which I sometimes need to shut down. When I take a backend down and shut it down, it loses its sessions. How can I gracefully shut down one of my app servers, so that Nginx waits until all sessions are completed, or at least for some time?
My simple config:
upstream loadbalancer {
ip_hash;
server 192.168.0.1:443;
server 192.168.0.2:443;
}
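For reference, the documented way to take a backend out of an ip_hash upstream is to mark it with the down parameter rather than delete it, so the hashing of client IPs is preserved; after an nginx -s reload the old worker processes finish in-flight requests, but nginx will not wait for application-level sessions:

upstream loadbalancer {
    ip_hash;
    server 192.168.0.1:443;
    server 192.168.0.2:443 down;   # no new requests are routed here
}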
OK, the issue is that each server has its own session manager, and when a server dies, the session data dies with it. A good solution is to use centralized session storage, for example on the same server that does the load balancing, with the other two servers connecting to it to fetch session data. If one server goes down and another server tries to serve a connection that the first was handling, the data will still be found, because it is stored elsewhere. A common way to do this is to use memcached as the session store.
As for the pros: you can add and remove as many app servers as you want, and users won't even notice any change.
As for the cons: if that single storage server dies, all session data is lost, because the data is centralized.
You haven't tagged your question with the language you are using, but if you search for this on Google you'll easily find useful posts to help you.
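As an illustration only, assuming PHP with the memcached extension: pointing all app servers at a shared memcached instance is a two-line change in php.ini (the host and port are placeholders):

session.save_handler = memcached
session.save_path = "192.168.0.100:11211"

Most other stacks have an equivalent out-of-process session store.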
Is it possible to configure IIS in such a way that it can handle multiple HTTP requests that arrive on the same TCP socket in HTTP pipelining mode in parallel?
We have a problem where multiple requests are done by a web client in a single TCP socket, using HTTP pipelining. The client basically sends let's say 10 requests at once, and then the server sends 10 responses (in the same order as the requests). Our server takes quite some time for each request, mostly waiting for external IO. It would be much more efficient if IIS could start to work on all 10 requests in parallel, then serialize the responses in the correct order back to the client. Obviously, the server would need some way to cache responses if e.g. response 3 is available earlier than response 2.
Is that possible somehow? Maybe this is not possible in IIS, or I'm just searching for the wrong keywords... We are running IIS 7.5 and ASP.NET 4.5 on Windows Server 2008 R2.
We came across the same issue in IIS 7.5.
Our solution was to enable "Web Garden"... and it really, really works well! The catch is that you can't have a "session"-based web site, because in-process session state isn't shared across worker processes. So if you have clients "logging in", you will have to re-architect that part. (We used cookies to store an encrypted token; anyway, that's beside the point.)
Go to:
Internet Information Services > Application Pools
Select the Pool being used (you should have a pool per site)
Click Advanced Settings...
Find "Maximum Worker Processes" and crank that sucker!
The number of processes you can push it up to depends entirely on how much RAM your system has. You can of course monitor and control this yourself.
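If you prefer the command line, the same setting can be changed with appcmd; the pool name here is just an example:

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.maxProcesses:8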
With a "Web Garden" enabled, you will notice (with Process Explorer or something similar), IIS will spawn a new instance of w3wp.exe for each request, up to the max number you specified. New requests simply get processed by the next available Worker Process available, enabling true IIS parallel request processing. If two requests come in within moments of each other, and request 2 is completed before request 1, request 2 is sends its response.
IIS uses the HTTP Server API (which sits on top of HTTP.sys), so I ran a simple test:
wrote an HTTP server using this API,
wrote a Winsock client that opens a connection and sends 2 HTTP requests.
I observed that if I called HttpReceiveHttpRequest twice on the server (without sending the response to the first request), it doesn't receive the second request; the second call simply blocks. This holds true for both PUT and GET requests.
It appears that HTTP.sys is in fact serializing requests to IIS on a single connection; I couldn't find any configuration on HTTP.sys that might modify this behavior.
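For anyone who wants to reproduce the test, here is a rough sketch of the server side against the v1 HTTP Server API; error handling is omitted, the URL and port are arbitrary, and it must run elevated so the URL registration succeeds:

#include <windows.h>
#include <http.h>
#include <stdio.h>
#pragma comment(lib, "httpapi.lib")

int main(void)
{
    /* Initialize the HTTP Server API (the same API IIS sits on top of). */
    HTTPAPI_VERSION version = HTTPAPI_VERSION_1;
    HttpInitialize(version, HTTP_INITIALIZE_SERVER, NULL);

    /* Create a request queue and listen on an arbitrary URL. */
    HANDLE queue = NULL;
    HttpCreateHttpHandle(&queue, 0);
    HttpAddUrl(queue, L"http://+:8080/", NULL);

    BYTE buffer[8192];
    PHTTP_REQUEST request = (PHTTP_REQUEST)buffer;
    ULONG bytesRead = 0;

    /* Receive the first request, but deliberately do not respond to it. */
    HttpReceiveHttpRequest(queue, HTTP_NULL_ID, 0, request,
                           sizeof(buffer), &bytesRead, NULL);
    printf("got first request\n");

    /* On a single pipelined connection this call blocks: HTTP.sys will
       not deliver the second request until the first has been answered. */
    HttpReceiveHttpRequest(queue, HTTP_NULL_ID, 0, request,
                           sizeof(buffer), &bytesRead, NULL);
    printf("got second request\n");

    HttpRemoveUrl(queue, L"http://+:8080/");
    HttpTerminate(HTTP_INITIALIZE_SERVER, NULL);
    return 0;
}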
As you can see in the screenshot, while requests from users all over the web are just being added to the queue, building up and up (green), only one single request is executing (blue).
This doesn't really answer the question, but it's a beautiful illustration of this disastrous situation.
My webapp is deployed in a cluster of multiple JBoss instances. There is an admin page in the webapp to perform certain JBoss instance-specific operations.
The problem is that requests are sent to a load balancer instead of directly hitting a specific individual instance.
Is there any way to direct a request to a specific instance? Or, at least, once the admin page is up, can all subsequent (Ajax) requests stick to the original instance that served the page?
I don't think HttpSession is going to help here. I need to target a specific instance, not maintain state for an individual client.
Thanks.
You are looking for how to configure sticky sessions.
Sending all requests in a user session consistently to the same backend server is known as persistence or stickiness. A significant downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem is usually relevant to central database servers: even if the web servers are "stateless" and not "sticky", the central database still is.
Assignment to a particular server might be based on a username, the client IP address, or random assignment; each approach has its advantages and disadvantages.
I would suggest going through the articles below on configuring JBoss in a cluster, rather than digging deep into the internals, unless and until you want that depth.
http://docs.jboss.org/jbossas/docs/Clustering_Guide/beta422/html/clustering-http-nodes.html
https://community.jboss.org/wiki/HTTPLoadbalancer
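As a rough sketch of what those guides walk through: with Apache mod_jk in front of JBoss, sticky sessions come down to a workers.properties along these lines (worker names, hosts, and ports are placeholders), plus a matching jvmRoute on each JBoss instance:

worker.list=loadbalancer
worker.node1.type=ajp13
worker.node1.host=192.168.0.1
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=192.168.0.2
worker.node2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=True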
Assume you have one box (a dedicated server) that's on 24/7, and several other boxes that are user machines with unused bandwidth. Assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address, and not an IP. I.e., I don't want to redirect to another web page; I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work some what along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do similarly)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells web browser to repeat the request to a user machine.
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can close down at almost any point in time, but it is assumed that it will wait for ongoing transactions to finish; no closing the server program in the middle of a GET or anything like that.
What you want is called a proxy server or load balancer; it would sit in front of your web server.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
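A minimal sketch of that setup in nginx, with placeholder IPs:

upstream backends {
    server 192.168.0.10;
    server 192.168.0.11;
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}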
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a p2p web server setup.
If that's the case, using DNS and HTTP 30x redirects would probably be what you need. It would probably look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server, and the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
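The redirect in that last step is an ordinary 302 response, something like (the hostname is hypothetical):

HTTP/1.1 302 Found
Location: http://user123.myapp.com/index.htm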
Some problems I see with this approach:
First, once a user gets redirected to a user machine, if that machine went offline it would seem like the app was dead. You could avoid this by having all the links on every page point specifically to "www.myapp.com" instead of using relative links, but then every single request would have to be routed through the master server, which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session state server of some sort accessible by all the user machines. Although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
and more interesting stuff in here
Apart from load balancing, you will need to set up a more or less similar environment on the "user machines".
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a hostname for each machine that is going to serve users:
www IN A xxx.xxx.xxx.xxx ; ip address of machine 1
www IN A xxx.xxx.xxx.xxx ; ip address of machine 2
www IN A xxx.xxx.xxx.xxx ; ip address of machine 3
Then as others come online, you could add them to the DNS entries:
www IN A xxx.xxx.xxx.xxx ; ip address of machine 4
The only problem is that you'll have to lower the time-to-live (TTL) for each record to keep the window small (I think the default is 86400 seconds, i.e. one day).
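In a BIND-style zone file that means an explicit TTL on each record, e.g. five minutes:

www 300 IN A xxx.xxx.xxx.xxx ; short TTL so clients re-resolve quickly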
If a machine goes down, you'll have to remove its DNS entry, though I do think this is the least intensive way of adding capacity to any website. Jeff Atwood has more info here: is round robin DNS good enough?
Users connect to our webserver via https, and stay on a secured connection throughout their use of our service. A typical user session will establish a small handful of connections to the server (one or two).
There are a very small number of exceptions we are trying to track down. Particular users will intermittently have handfuls of hundreds of connections established. When we happen to catch the problem in the act, we can see the exchange of the SSL handshake, and from the perspective of the server, all appears to be in order. Yet we never observe a payload - the client instead connects on a new port and initiates a new handshake.
We do not have access to the client, and cannot observe the behavior from that side of the connection. Nor do we have a local scenario that can reproduce the problem.
It is our belief (though not confirmed) that the user agent is connecting to our server directly, and not through a proxy.
Does anybody recognize these symptoms? Can anyone suggest steps to further identify the problem?
Are there any patterns you can see to this traffic, aside from making many repeated requests?
For example, do the requests come from the same IP ranges? Possibly search engines or other spiders, or maybe from countries that you normally don't get users from, possibly indicating some sort of weird botnet or at least something you could block?
Do these rogue requests always negotiate to use a particular cipher suite, potentially indicating the client software?
Does it make any difference if you change the cipher suites available for negotiation?
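One way to see what a client negotiates, if you can reproduce a connection yourself, is OpenSSL's built-in client (the hostname is a placeholder); it prints the protocol version and cipher suite that were agreed on:

openssl s_client -connect example.com:443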
What server software are you using, and are there any firewalls within your network that could potentially be dropping some responses to the user?
I've seen a botnet flooding HTTPS sites being mentioned.
This is probably not your situation, but I thought I'd mention it.
I'm seeing Chrome (12.0.742.60 beta) flooding my server with HTTPS connections, some half a dozen or more connections for a single static picture being served... as if it had an optimization to build up connections with ready HTTPS handshakes waiting for requests to send, and then after the page (file) has been served it closes them all.
On plain HTTP I see only two connections (one extra for favicon.ico).