The goal
Allow a browser to exchange information with a service running locally. Allow the service to figure out which user (logon session in Windows) runs the browser. Avoid, if possible, storing a TLS certificate and private key on the machine. A bonus task: provide a solution for setups where anti-virus software like Kaspersky or Sophos proxies all TCP connections.
The story
The underlying OS is Windows, but it could be any modern OS. There is a daemon running in the system; in the case of Windows this is a Windows service. A JavaScript file, loaded by a web browser from a remote server, sends data to the daemon. The daemon does not have an HTTP/HTTPS server. Instead, the daemon opens N ports and listens for incoming connections, where N is a low two-digit number.
The JS initiates TCP connections to a selected group of K ports out of the N. In the current implementation the JS attempts to load scripts from 127.0.0.1:port-number. The daemon accepts each connection and immediately closes it (a kind of port knocking). The daemon recovers the data from the ports "knocked" by the JS.
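For illustration, the browser side of such a knock might look roughly like this (a minimal TypeScript sketch; the /knock.js path and the concrete port numbers are made up, and the loads are expected to fail because the daemon closes the socket right away):

function knock(ports: number[]): void {
  for (const port of ports) {
    // Only the TCP connection matters; the daemon accepts it and closes it,
    // so the script never actually loads.
    const s = document.createElement("script");
    s.src = `http://127.0.0.1:${port}/knock.js`;
    s.onerror = () => s.remove();  // expected outcome: connection made, then dropped
    document.head.appendChild(s);
  }
}

// Example: the backend assigned the 3-port tuple (4123, 4127, 4131) to this session.
knock([4123, 4127, 4131]);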
In the current implementation the backend chooses a unique tuple of ports, for example a 3-port combination. The tuple is a key identifying the browser session. The service collects the "knocks" - the ports accessed by a specific OS process - and queries the backend using the collected ports.
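The service side could be sketched along these lines (the real daemon is a native Windows service, so this Node-style TypeScript is only an illustration of the accept-and-close behaviour and of collecting the knocks; the port range and the single-session assumption are made up):

import * as net from "net";

const FIRST_PORT = 4100;
const PORT_COUNT = 30;               // "N is a low two-digit number"
const knocked = new Set<number>();   // ports knocked so far (a single session assumed here)

for (let port = FIRST_PORT; port < FIRST_PORT + PORT_COUNT; port++) {
  const server = net.createServer((socket) => {
    knocked.add(port);               // record the knock...
    socket.destroy();                // ...and close the connection immediately
  });
  // Some ports in the range may already be busy; skip them instead of crashing.
  server.on("error", (err) => console.warn(`port ${port} unavailable: ${err.message}`));
  server.listen(port, "127.0.0.1");
}

// Later, the collected tuple (e.g. {4123, 4127, 4131}) is sent to the backend,
// which maps it back to the browser session.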
One of the goals of the solution is to avoid implementing an HTTP/HTTPS server in the service and to avoid maintaining an SSL certificate.
The problem
The order in which the JS connects to the ports is not defined. Specifically, two browsers can run knocking sessions simultaneously.
The service can fail to open some of the ports in the range N because the ports are busy.
The order is not critical because the backend chooses a unique combination from the range N. I need the system to tolerate missing ports. I was thinking about choosing more than one tuple and using more than one range N.
The question
How can I adopt FEC (forward error correction) for this problem? Does the design make sense?
I have a realtime Asterisk setup with 3 servers. The database holds only SIP peers and voicemail boxes; voicemail messages are stored on the filesystem (FILE_STORAGE).
Servers A and B handle calls and SIP registrations, and Server C runs DUNDi.
Currently everything works fine; I can call from Server A to Server B. The problem is when I leave a message for a number that is busy and registered on Server B: if that number then disconnects and registers on Server A, it cannot listen to the messages, because they are stored on Server B.
How can I make any user able to listen to his messages, no matter which server he is on?
You have a lot of options, most of them in the clustering area.
Simplest options are:
A GlusterFS setup on both servers, with voicemail in a GlusterFS directory. This one handles failover.
An NFS/Samba share on both servers.
MySQL master-master replication with ODBC_STORAGE, putting all voicemails in the database. This one is recommended if you also want easy access to your voice files from a web interface and simple search/lookup/retrieval of messages. It is highly recommended to use InnoDB tables and an optimized MySQL config.
The easiest way, just to be able to listen to the messages no matter which of the two servers the user is registered on, is NFS: mount, for example, /var/spool/asterisk/. In this case you need to install some additional components.
Here is a great tutorial on how to do this:
How to configure an NFS server and mount NFS shares - Ubuntu
Another way is to make a master-slave setup with the two servers in a cluster and use rsync. Then you can sync the folder to the remote server every X minutes/hours/days to keep the messages in case of failure.
rsync -a local_dir/ user@remote-host-ip:/path/to/dir
Assume you have one box (a dedicated server) that is on 24/7 and several other boxes that are user machines with unused bandwidth. Assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address and not an IP. I.e. I don't want to redirect to another web page; I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work somewhat along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do the same)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells the web browser to repeat the request to a user machine.
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can shut down at almost any point in time, but it is assumed that the user machine will wait for ongoing transactions to finish, i.e. not close the server program in the middle of a GET or something.
What you want is called a proxy server or load balancer; it would sit in front of your web servers.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
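As a toy illustration of that idea, a tiny round-robin reverse proxy in Node-style TypeScript could look like this (the back-end addresses are made up; in practice you would use something like nginx or HAProxy):

import * as http from "http";

// Hypothetical back-end servers the balancer forwards to.
const backends = [
  { host: "192.168.1.10", port: 8080 },
  { host: "192.168.1.11", port: 8080 },
];
let next = 0;

// The browser only ever talks to this process and never sees the back ends.
http.createServer((clientReq, clientRes) => {
  const backend = backends[next++ % backends.length];  // simple round robin
  const proxied = http.request(
    {
      host: backend.host,
      port: backend.port,
      path: clientReq.url,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (backendRes) => {
      clientRes.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
      backendRes.pipe(clientRes);
    }
  );
  proxied.on("error", () => { clientRes.writeHead(502); clientRes.end("Bad gateway"); });
  clientReq.pipe(proxied);
}).listen(80);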
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a p2p web server setup.
If that's the case, DNS and HTTP 30x redirects are probably what you need. It might look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server and a the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
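A rough sketch of that 302 hand-off on the master server, again in Node-style TypeScript (the registry and the user123.myapp.com host name come from the example above and are otherwise made up):

import * as http from "http";

// Machines that have registered themselves as online, keyed by the DNS name
// the master created for them.
const onlineMachines = ["user123.myapp.com"];

http.createServer((req, res) => {
  if (onlineMachines.length === 0) {
    res.writeHead(503);              // nobody online: fail (or serve the page locally)
    res.end("No user machine is online");
    return;
  }
  // Pick a machine and bounce the browser to the same path on it.
  const target = onlineMachines[Math.floor(Math.random() * onlineMachines.length)];
  res.writeHead(302, { Location: `http://${target}${req.url ?? "/"}` });
  res.end();
}).listen(80);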
Some problems I see with this approach:
First, once a user gets redirected to a user machine, if that machine goes offline it would seem like the app was dead. You could avoid this by having all the links on every page point specifically to "www.myapp.com" instead of using relative links, but then every single request has to be routed through the master server, which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session state server of some sort accessible by all the user machines. Although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
and there is more interesting stuff in there
Apart from load balancing, you will need to set up a more or less similar environment on the user machines.
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a hostname for each machine that is going to serve users:
www IN A xxx.xxx.xxx.xxx ; IP address of machine 1
www IN A xxx.xxx.xxx.xxx ; IP address of machine 2
www IN A xxx.xxx.xxx.xxx ; IP address of machine 3
Then as other machines come online, you could add them to the DNS entries:
www IN A xxx.xxx.xxx.xxx ; IP address of machine 4
The only problem is that you'll have to lower the time to live (TTL) for each record (I think the default is 86400 seconds, i.e. 1 day).
If a machine goes down, you'll have to remove its DNS entry, though I do think this is the least intensive way of adding capacity to any website. Jeff Atwood has more info here: is round robin dns good enough?
Can a state server in ASP.NET be made fault tolerant? By that I mean: when one state server goes down, can ASP.NET applications switch to another state server?
I do not want to go to database-based state management, as that seems considerably slower than the state server.
You need to configure two different servers in a failover cluster, i.e. if one server goes down due to some issue, the other server takes over. For details see:
http://technet.microsoft.com/en-us/library/cc731844%28WS.10%29.aspx
When you configure your servers in failover mode, a virtual IP is assigned to you, which you then use as your state server's IP.
Also have a look at the peer-to-peer state server:
http://www.codeproject.com/KB/aspnet/p2pstateserver.aspx
This is a fairly basic question about the state server, but assume there are 2 servers behind a load balancer. How do I configure the session state server?
So, I have machine1 and machine2. I assume that I would need to install the state server on one machine only and then use its internal IP to refer to it, as opposed to installing the state server on both machines. Is this correct?
In your scenario (and most webfarm scenarios), a single state server is right.
You could refer to it by the internal IP, or set up a DNS entry for the IP on the internal network and refer to it using that.
A single state server is mad, as you have no fault tolerance; if it goes down, it's game over. You need a distributed state server stored on both servers.