I am designing an application for streaming video. I have developed a model in which a server waits for incoming requests. The server itself is already serving a good number of clients and cannot afford to serve any more. When a new connection comes in, the server chooses from among its clients a candidate that will serve the incoming client's request. This choice should be very intelligent, so I am using various heuristics, such as the selected client's bandwidth, its location, and its distance from the requesting client, to come to a decision. My question is: is there any tool available to find out the BANDWIDTH and LOCATION of a host, and the DISTANCE (maybe in hop count)? For hop count I can use traceroute, but that will be too expensive, as it takes a long time getting a reply from every intermediate router.
Any help will be appreciated.
Thanks!
Use traceroute to find number of hops.
Use dnsstuff APIs to find location.
Do some TCP packet exchange with each client to estimate its bandwidth. The absolute numbers will be rough, but you will at least be able to rank clients from highest to lowest bandwidth relative to each other (see the sketch after this list).
If a client is going to serve older video, take the amount of data it already holds into consideration (i.e. the bigger the content, the higher the chance of streaming the correct data).
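A minimal sketch of such a TCP probe, assuming each candidate client runs a simple echo service on a port you choose (9000 below is a made-up value, as is the sample address):

```typescript
// Rough relative-bandwidth probe, assuming each candidate client runs a
// simple TCP echo service on a known port (9000 is a made-up value).
import * as net from "net";

const PROBE_BYTES = 256 * 1024; // 256 KB test payload

function probeBandwidth(host: string, port = 9000): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    let received = 0;
    const sock = net.createConnection(port, host, () => {
      sock.write(Buffer.alloc(PROBE_BYTES)); // send the test payload
    });
    sock.on("data", (chunk) => {
      received += chunk.length;
      if (received >= PROBE_BYTES) { // the client has echoed everything back
        const seconds = Math.max((Date.now() - start) / 1000, 0.001);
        sock.end();
        resolve((2 * PROBE_BYTES) / seconds); // round trip moved 2x the payload
      }
    });
    sock.on("error", reject);
  });
}

// The absolute number mixes upload and download, so use it only to rank
// candidates against each other. 198.51.100.7 is a placeholder address.
probeBandwidth("198.51.100.7").then((bps) =>
  console.log(`~${(bps / 1024).toFixed(0)} KB/s`));
```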
I have developed a server-client model based on UDP. Clients connect to the server on a random basis; I mean the number of clients alive at a time is not fixed.
Any new client can communicate at any time. That means there could be 1 live client, 100 clients, or any other number.
Now, in such a model, I need to add HTTP requests. A browser would send a request to the server, and the server would then forward it to one of the clients based on some identification.
Is there any method or ready-made server (like nginx or lighttpd) that I can use for this requirement?
My big worry is that the destination clients are not fixed; they keep changing. Most servers (nginx or lighttpd) have static entries for destination addresses.
I visualize your scenario as multiple sensors that connect to the server when they have something to say; they then send a request and wait for the answer.
I gather you also want to administer those modules somehow, and that is why you want to access them via HTTP.
You could keep the new configuration items on the regular server, so that upon any update connection the response would include (in a piggy-backed fashion) the changes for that node.
Or the server could somehow mark your interest in accessing a certain node, and then, when that node connects, the server could notify the interested client. The sensor would then need to pay attention to clients wanting to connect to it during a time window. A minimal sketch of the piggy-backing idea follows.
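Here is a sketch of that piggy-backing approach in node-style TypeScript; the id-in-datagram identification scheme and all URLs are made up for illustration, since the real identification is whatever your UDP protocol already uses:

```typescript
// Sketch of the piggy-back idea: HTTP requests for a client are queued and
// delivered when that client next checks in over UDP.
import * as dgram from "dgram";
import * as http from "http";

const pending = new Map<string, string[]>(); // clientId -> queued commands

// HTTP side: the browser queues a command for a given client id.
http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const id = url.searchParams.get("client") ?? "";
  const cmd = url.searchParams.get("cmd") ?? "";
  if (!pending.has(id)) pending.set(id, []);
  pending.get(id)!.push(cmd);
  res.end("queued\n"); // delivered on the client's next check-in
}).listen(8080);

// UDP side: when a client checks in, piggy-back anything queued for it.
const udp = dgram.createSocket("udp4");
udp.on("message", (msg, rinfo) => {
  const id = msg.toString().trim(); // client announces its id
  const queued = pending.get(id) ?? [];
  pending.delete(id);
  udp.send(JSON.stringify(queued), rinfo.port, rinfo.address);
});
udp.bind(9999);
```

The interest-notification variant works the same way, except the server pushes a "someone wants to reach you" marker instead of the queued commands themselves.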
Certainly, more information would help us help you.
I have several clients that constantly post data to a REST service. The REST service sits behind a network load balancer. Each client sends 100 - 500 MB a day, and I need to support 500+ clients.
I can either POST very large requests, which reduces the overhead of TCP/IP session set-up and HTTP headers but firmly ties one client to a particular server and limits my scalability options. Alternatively, I can send small HTTP requests, which I can load-balance well, but then I pay more overhead for TCP/IP session set-up and HTTP headers.
What is the recommended packet size for HTTP POST? Or how can I calculate one for my environment?
There is no recommended size.
While HTTP POST size is not constrained by the RFCs, HTTP is a commodity protocol implementing request/response-type messaging, so most of the infrastructure is configured around the idea that TCP connections are not particularly long-lasting and do not carry significant amounts of data. In other words, there will be factors outside your control that may impact the service: although HTTP supports range requests for responses, there is no corollary for requests.
You can get around a lot of these (although not all) by using HTTPS. However you still need to think about how you detect/manage outages - are you happy to wait for a TCP timeout?
With 500+ clients presumably using the system quite heavily, the congestion avoidance limits shouldn't be a problem - whether TCP window scaling is likely to be an issue depends on how the system is used. HTTP handshakes should not be an issue unless you restrict the request size to something silly.
If the service is highly dependent on clients pushing lots of data to your server, then I'd encourage you to look at parsing the data on the client (given the volume, presumably it's coming from files, implying a signed Java applet or JavaScript with the UniversalBrowserRead privilege) and then sending it over a bi-directional communication channel such as a websocket (a sketch follows).
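As a rough sketch of the server side of that approach, assuming the popular ws npm package (the record format and port are made up):

```typescript
// Server-side sketch of streaming parsed records over a websocket instead
// of many small POSTs, assuming the "ws" npm package.
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8081 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Each message carries one parsed record rather than a whole file,
    // which keeps retry and load-balancing granularity small.
    processRecord(JSON.parse(data.toString()));
  });
});

function processRecord(record: unknown): void {
  console.log("got record", record); // stand-in for the real ingestion logic
}
```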
Leaving that aside for now, the only way you can find out what the route between your clients and your server will support is to measure it - and monitor it. I would expect that a 2 MB upload size would work pretty much anywhere, while a 10 MB size would work most of the time within the US or Europe - and that you could probably increase this to 50 MB as long as there are no mobile clients.
But if you want to maintain the effectiveness of the service you'll need to monitor bandwidth, packet loss and lost connections.
I was considering doing a chat server using node.js/socket.io. Should I make it a TCP server or an HTTP server? I'd imagine a TCP server would be more efficient, but can you send other stuff to it, like file attachments etc.? If TCP is more efficient, how much more so? Also, just wondering how many concurrent connections can one node.js server handle? Is it more work to do TCP or HTTP?
You are talking about 2 totally different approaches here - TCP is a transport layer protocol and HTTP is an application layer protocol. HTTP (usually) operates over TCP, so whichever option you choose, it will still be operating over TCP.
The efficiency question is sort of a moot point, because you are talking about different OSI layers. If you went for raw TCP sockets, your solution would probably be more efficient - in bandwidth at least - since HTTP contains a whole bunch of extra data (the headers) that would likely be irrelevant to your purposes (depending on the scale of the chat program). What you are talking about developing there is your own application layer protocol.
You can send anything you like over TCP - after all, HTTP can send attachments, and that operates over TCP. FTP also operates over TCP, and that is designed purely for transferring "attachments". In order to do this, you would need to write your protocol so that it was able to tell the remote party that the following data was a file, then send the file data, then tell the remote party that the transfer is complete. Implementations of this are many and varied (the HTTP approach is completely different from the FTP approach) and your options are pretty much infinite; a minimal framing sketch follows.
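For illustration, here is one possible length-prefixed framing design in node-style TypeScript; it is a sketch of the general technique, not any standard protocol:

```typescript
// One possible length-prefixed framing design for sending a file over a raw
// TCP socket.
import * as net from "net";
import * as fs from "fs";

function sendFile(sock: net.Socket, path: string): void {
  const data = fs.readFileSync(path);
  const header = Buffer.alloc(5);
  header.writeUInt8(1, 0);              // message type 1 = "file follows"
  header.writeUInt32BE(data.length, 1); // payload length in bytes
  sock.write(header); // tell the remote party a file is coming, and how big
  sock.write(data);   // then the file bytes themselves
  // A real protocol would also frame a "transfer complete" message, chunk
  // large files, and respect backpressure via write()'s return value.
}
```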
I don't know for sure about the node.js connection limit, but I can say with a fair amount of confidence that it is limited by the operating system (the number of open file descriptors, for instance), so benchmarking on your own setup is the best way to get to grips with that question.
It is debatable whether it is more work to do it with TCP or HTTP - it's a lot of work to do it in both. I would probably lean more toward the TCP option being your best bet. While TCP would require you to design a protocol rather than/as well as an application, HTTP is not particularly suited to live, 2-way applications like chat servers. There are many implementations of chat over HTTP that use AJAX, but I can tell you from painful experience that they are a complete pain in the rear-end.
I would say that you should only be looking at HTTP if you are intending the endpoint (i.e. the client) to be a browser. If you are going to write a desktop app for the endpoint, a direct TCP link would definitely be the way to go. The main reason for this is that HTTP works in a request-response manner, where the client sends a request to the server, and the server responds. Over TCP you can open a single TCP stream that can be used for bi-directional communication. This means that the server can push an event to the client instantly, while over HTTP you have to wait for the client to send a request before you can respond with an event. If you intend to use a browser as the client, it would also make the whole file transfer thing much more tricky (the sending at least).
There are ways to implement this over HTTP using long-polling and server push, but they can be a real pain to implement (a minimal long-polling sketch follows).
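For a flavour of what long-polling involves, here is a bare-bones sketch using only node's built-in http module; the /poll and /send URLs and the 30-second timeout are arbitrary choices:

```typescript
// Bare-bones long-polling sketch: a /poll request parks until a message
// arrives or a timeout fires, at which point the client re-polls.
import * as http from "http";

let waiting: http.ServerResponse[] = [];

http.createServer((req, res) => {
  if (req.url === "/poll") {
    waiting.push(res); // park the response until a message arrives
    setTimeout(() => { // ...or a timeout fires, so the client re-polls
      if (waiting.includes(res)) {
        waiting = waiting.filter((r) => r !== res);
        res.end("[]");
      }
    }, 30_000);
  } else if (req.url?.startsWith("/send")) {
    const msg = new URL(req.url, "http://x").searchParams.get("m") ?? "";
    for (const r of waiting) r.end(JSON.stringify([msg])); // wake all pollers
    waiting = [];
    res.end("ok");
  }
}).listen(8080);
```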
If you are going to implement this on a LAN (or possibly even over the internet) it is worth considering UDP instead of TCP - in a chat application it is not usually absolutely mission-critical that messages arrive in the right order, and even if it were, users would probably not be able to type faster than the variations in network latency (probably <100ms). Then for file transfers you could either negotiate a separate TCP socket for the data exchange (like FTP), or implement some kind of UDP ACK system (like TFTP).
I feel there is a lot more to say on this subject but right now I can't put it into words - I may extend this answer at some point.
Chat servers are the Hello World program in node. Use http.
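For reference, the canonical version is only a few lines, assuming the current socket.io API (serving the client page is left out):

```typescript
// The canonical node chat server in a few lines, using socket.io.
import * as http from "http";
import { Server } from "socket.io";

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  socket.on("chat message", (msg: string) => {
    io.emit("chat message", msg); // broadcast to every connected client
  });
});

httpServer.listen(3000);
```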
As far as the question of how many concurrent connections can it handle, that all depends on your system. Set up a simple chat server and then try benchmarking it.
Also, check out http://search.npmjs.org/ and search for chat for a few pointers.
When we talk about the capacity of a web application, we often mention the concurrent requests it can handle.
As discussed in another question of mine, Ethernet uses TDM (Time Division Multiplexing) and no two signals can pass along the wire simultaneously. So if the web server is connected to the outside world through an Ethernet connection, there will literally be no concurrent requests at all: all requests will come in one after another.
But if the web server is connected to the outside world through something like a wireless network card, I believe multiple signals could arrive at the same time as electromagnetic waves. Only in this situation are there real concurrent requests to talk about.
Am I right on this?
Thanks.
I imagine "concurrent requests" for a web application doesn't get down to the link level. It's more a question of the processing of a request by the application and how many requests arrive during that processing.
For example, if a request takes on average 2 seconds to fulfill (from receiving it at the web server, through processing by the application, to sending back the response) and requests arrive at, say, 50 per second, then by Little's law roughly 2 × 50 = 100 requests are in flight at any given moment, so the application needs to handle about 100 concurrent requests.
The requests need to overlap and be handled concurrently, otherwise the queue of requests would just fill up indefinitely. This may seem like common sense, but for a lot of web applications it's a real concern because the flood of requests can bog down a resource for the application, such as a database. Thus, if the application has poor database interactions (overly complex procedures, poor indexing/optimization, a slow link to a database shared by many other applications, etc.) then that creates a bottleneck which limits the number of concurrent requests the application can handle, even though the application itself should be able to handle them.
Imagining an HTTP server listening at port 80, what happens is:
a client connects to the server to request some page; it is connecting from some origin IP address, using some origin local port.
the OS (actually the network stack) looks at the incoming request's destination IP (since the server may have more than one NIC) and destination port (80) and verifies that some application is registered to handle data on that port (the HTTP server). The combination of 4 numbers (origin IP, origin port, destination IP, port 80) uniquely identifies a connection. If such a connection does not exist yet, a new one is added to the network stack's internal table and a connection request is passed on to the HTTP server's listening socket. From then on the network stack just passes on data for that connection to the application.
Multiple clients can be sending requests, and for each one the above happens. So from the network perspective, everything happens serially, since data arrives one packet at a time.
From the software perspective, the http server is listening to incoming requests. The number of requests it can have queued before the clients start getting errors is determined by the programmer based on the hardware capacity (this is the first bit of concurrency: there can be multiple requests waiting to be processed). For each one it will create a new socket (as fast as possible in order to continue emptying the request queue) and let the actual processing of the request be done by another part of the application (different threads). These processing routines will (ideally) spend most of their time waiting for data to arrive and react (ideally) quickly to it.
Since the processing of data is usually many times faster than the network I/O, the server can handle many requests while processing network traffic, even if the hardware consists of only one processor. Multiple processors increase this capability. So from the software perspective, everything happens concurrently.
How the actual processing of the data is implemented is where the key to performance lies (you want it to be as efficient as possible). Several possibilities exist (async socket operations as provided by the Socket class, threadpool, unique threads, the new parallel features from .NET 4).
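The answer above names the .NET options; as a language-neutral illustration of the same "accept quickly, process elsewhere" pattern, here is a sketch in node-style TypeScript (the 100 ms delay stands in for a database call):

```typescript
// Illustration of "accept connections quickly, process elsewhere": the
// accept path never blocks on request processing.
import * as net from "net";

const server = net.createServer((sock) => {
  // Accepting returns immediately; per-connection work runs on I/O events.
  sock.on("data", (chunk) => {
    handleRequest(chunk).then((reply) => sock.write(reply));
  });
});

async function handleRequest(data: Buffer): Promise<string> {
  // Stand-in for a database call: mostly waiting, not computing, which is
  // why one process can keep many requests in flight at once.
  await new Promise((resolve) => setTimeout(resolve, 100));
  return `processed ${data.length} bytes\n`;
}

server.listen(8080);
```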
It's true that no two packets can arrive at the exact same time (unless multiple network cards are in use, per Gabe's comment). However, a web request usually requires a number of packets. The arrival of these packets is interspersed when multiple requests come in at nearly the same time (whether over wired or wireless access). Also, the processing of these requests can overlap.
Add multi-threading (or multiple processors / cores) to the picture, and you can see how lengthy operations such as reading from a database (which requires a lot of waiting around for a response) can easily overlap even though the individual packets are arriving in a serial fashion.
Edit: Added note above to incorporate Gabe's feedback.
Is it possible to find out the connection speed of the client when it requests a page on my website?
I want to serve video files, but depending on how fast the client's network is I would like to serve higher or lower quality videos. Google Analytics shows me the clients' connection types; how can I find out what kind of network the visitor is connected to?
Thanks.
No, there's no feasible way to detect this server-side short of monitoring the network stream's send buffer while streaming something. If you can switch quality mid-stream, this is a viable approach because if the user's Internet connection suddenly gets burdened by a download you could detect this and switch to a lower-quality stream.
But if you just want to detect the speed initially, you'd be better off doing this detection on the client and sending the results to the server with the video request.
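A minimal client-side sketch of that idea, where /test.bin, its size, and the quality threshold are placeholders you would tune for a real deployment:

```typescript
// Client-side sketch: time the download of a fixed-size test file and pick
// a quality tier based on the measured rate.
async function measureKbps(): Promise<number> {
  const TEST_BYTES = 500 * 1024; // must match the real size of /test.bin
  const start = performance.now();
  await fetch("/test.bin", { cache: "no-store" }).then((r) => r.arrayBuffer());
  const seconds = (performance.now() - start) / 1000;
  return (TEST_BYTES * 8) / 1024 / seconds; // kilobits per second
}

measureKbps().then((kbps) => {
  const quality = kbps > 2000 ? "hd" : "sd"; // crude threshold, tune to taste
  window.location.href = `/video?quality=${quality}`;
});
```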
Assign each request a token (/videos/data.flv?token=uuid123) and calculate the amount of data your webserver sends for that token per second (possibly checking multiple tokens for one username over a time period). You can do this with the Apache sources and APR.
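The answer proposes doing this inside Apache with APR; as a generic illustration of the same per-token accounting idea, here is a sketch in node-style TypeScript with made-up paths:

```typescript
// Generic illustration of per-token byte accounting in a node serving layer.
import * as http from "http";
import * as fs from "fs";

const sent = new Map<string, { bytes: number; since: number }>();

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://x");
  const token = url.searchParams.get("token") ?? "anon";
  const stat = sent.get(token) ?? { bytes: 0, since: Date.now() };
  sent.set(token, stat);

  const stream = fs.createReadStream("data.flv"); // the video being served
  stream.on("data", (chunk) => { stat.bytes += chunk.length; });
  stream.pipe(res);
  // A periodic job can divide stat.bytes by the elapsed seconds to get the
  // observed per-token throughput.
}).listen(8080);
```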