I'm trying to get a deeper understanding of how IIS works.
I understand that http.sys is one of its major components. However, I have been having trouble finding easily digestible information about it. I couldn't get a good mental model going until I heard about WSK; then I think it all fell into place.
From a lot of random googling and a little experimentation, this is my current high-level understanding of why it exists and how it does its stuff.
Why:
Port sharing and higher-performance caching.
How:
User-mode processes use the Winsock API to open a socket listening on a port, which gives them access to the networking subsystem, e.g. TCP/IP. Kernel-mode software like the http.sys driver uses the Winsock Kernel (WSK) API to achieve the same end, drawing from the same pool of TCP port numbers as the Winsock API.
IIS, a web service, or anything else that wants to use HTTP registers itself with http.sys using a unique URL/port combination. http.sys opens a socket on that port using WSK (if it hasn't already done so for another URL/port combination on the same port) and listens.
When the transport layer (tcpip.sys) has reassembled a load of IP packets back into an HTTP request that a client sent, it hands the request to http.sys via the port in the request. http.sys uses the URL/port combination to route it to the appropriate process, which parses it however it pleases.
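To sanity-check that last step, here is a tiny Java sketch of the routing idea as I picture it: a table of URL-prefix registrations, with each request handed to the longest matching prefix. This is purely a conceptual model with made-up names, not the real http.sys or Windows HTTP Server API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Conceptual model only: http.sys-style routing by registered URL prefix.
// Real registrations look like "http://+:80/reports/"; here we just key on
// the path prefix and assume everything is on port 80.
public class PrefixRouter {
    private final Map<String, Consumer<String>> registrations = new HashMap<>();

    // Each worker process registers the prefix it wants to own.
    public void register(String pathPrefix, Consumer<String> process) {
        registrations.put(pathPrefix, process);
    }

    // Hand the request to the process with the longest matching prefix,
    // which is roughly how I picture http.sys picking a request queue.
    public void dispatch(String path) {
        registrations.entrySet().stream()
            .filter(e -> path.startsWith(e.getKey()))
            .max((a, b) -> Integer.compare(a.getKey().length(), b.getKey().length()))
            .ifPresentOrElse(
                e -> e.getValue().accept(path),
                () -> System.out.println("no registration for " + path));
    }

    public static void main(String[] args) {
        PrefixRouter router = new PrefixRouter();
        router.register("/", p -> System.out.println("default web site handles " + p));
        router.register("/reports/", p -> System.out.println("reporting service handles " + p));

        router.dispatch("/reports/monthly");  // longest match wins
        router.dispatch("/index.html");       // falls back to "/"
    }
}
```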
I know it seems like I'm answering my own question, but I'm really not that sure of myself on this and would like some closure so I can get on with more interesting things.
Am I close?
Firstly, thank you for taking the time out to read this post.
I'm looking to develop a TCP/IP-enabled device using the Microchip PIC18 or PIC32 family of embedded microcontrollers with Microchip's TCP/IP Stack. However, my knowledge of networking is pretty basic at the moment, hence this post.
Can anyone recommend the best protocol to use for my TCP/IP embedded device so that it can communicate with a server in a data centre? My intention is to have the embedded device located at a remote location somewhere on the internet, where the server can communicate with the device and download data such as thermometer probe readings to be stored in a database. I would also like the server in the data centre to be able to reconfigure settings and variables on the remote device should I need to.
My research on protocols so far has led me to the following options:
SNMP v3 (version 3 due to encryption and authentication)
UDP (though I read this can be unreliable but is fast)
TCP (I'm not too clued up on this yet)
Can anyone offer me advice on the best route to go down? I'm not expecting a detailed answer from you, but I would really like an idea of what topics/protocols to look into and research.
My intent is to deploy many of these embedded devices over the internet where they all send their data back to the server.
I assume that the remote embedded device will have to connect to the server rather than vice versa as the server will have a static IP address or DNS name, whereas the remote device addresses will be unknown.
Any advice on this would be greatly appreciated. Please don't hesitate to ask if I've missed out any key information in this post.
Many thanks.
Rob
* UPDATE *
It was pointed out that I'm probably misusing the term Web Server, so I've amended my post to mention Server in a Data Centre instead. Thank you for pointing this out to me.
If the target is a Web server you don't have any choice. You have to use HTTP, which runs over TCP.
Or else you are misusing the term 'Web server'.
In many ways this depends on your specific requirements. TCP provides quite a reliable connection: it gives you a means of determining whether the client is connected, when it connected and when it disconnected. UDP is connectionless: the server opens a port and listens for data, but there is no automatic connection management, so clients need to explicitly 'tell' the server when they arrive and when they leave (which also means you will need to build your own timeout facility).
Also, if you have very limited memory/processing resources, it is worth bearing in mind that UDP is a less 'costly' protocol as it avoids a lot of the overheads TCP incurs due to its inbuilt connection management.
While these are all protocols, they really only handle the connection itself. You will probably still need to create your own application-level protocol for managing the data. For instance, when you send data over either TCP or UDP, the bytes you send may not all arrive at the server at the same time. This means you need a way of validating each message you receive to ensure you have all of it. This is often achieved with a combination of a checksum and a field giving the total size of the data sent.
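To make that concrete, here is a rough sketch of such a framing scheme in Java (purely illustrative; the field sizes and checksum are arbitrary choices, not part of Microchip's stack or any standard): each message is sent as a length prefix, the payload, then a checksum, so the receiver can tell when it has a complete, intact message even if the bytes arrive in several pieces.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of a minimal framing scheme: [2-byte length][payload][1-byte checksum].
// The receiver reads exactly 'length' bytes and verifies the checksum, so it
// can reassemble a message that arrives split across several reads.
public class SimpleFraming {

    public static void writeFrame(OutputStream out, byte[] payload) throws IOException {
        DataOutputStream data = new DataOutputStream(out);
        data.writeShort(payload.length);      // length prefix
        data.write(payload);                  // the actual readings, settings, etc.
        data.writeByte(checksum(payload));    // simple integrity check
        data.flush();
    }

    public static byte[] readFrame(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int length = data.readUnsignedShort();
        byte[] payload = new byte[length];
        data.readFully(payload);              // blocks until all bytes have arrived
        byte expected = data.readByte();
        if (checksum(payload) != expected) {
            throw new IOException("checksum mismatch - frame corrupted");
        }
        return payload;
    }

    // Very weak checksum, fine for illustration; a CRC would be used in practice.
    private static byte checksum(byte[] bytes) {
        int sum = 0;
        for (byte b : bytes) sum += b;
        return (byte) sum;
    }
}
```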
You might also consider MQTT (http://mqtt.org). It is a lightweight messaging protocol. For encoding your messages, you might consider protobuf (https://code.google.com/p/protobuf/).
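If MQTT appeals, the device-side publish is roughly this simple. The sketch below uses the Eclipse Paho Java client purely for illustration (on a PIC you would use a C MQTT client instead), and the broker address, client id and topic names are made-up placeholders:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Sketch of a device-side publish: the device connects out to the broker (so
// no inbound connection to the device is needed) and pushes a reading to a
// topic. The server subscribes to that topic, and can publish configuration
// changes to a per-device "config" topic that the device subscribes to.
public class ProbePublisher {
    public static void main(String[] args) throws MqttException {
        // Placeholder broker URL and client id.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "probe-0001");
        client.connect();

        MqttMessage reading = new MqttMessage("21.5".getBytes());
        reading.setQos(1);                            // at-least-once delivery
        client.publish("site42/probe1/temperature", reading);

        client.disconnect();
    }
}
```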
I will be moving a high-load production system over to new hardware over the next few weeks. However, in the meantime I would like to validate that the new hardware will handle the expected loads. I would really like to stick some kind of 'proxy' in front of the current web server and copy all of that HTTP traffic to the new environment, i.e. run them both in parallel.
Ideally this proxy would also validate that the responses are the same.
I can then monitor the new hardware stats (cpu, mem, etc) and see if it looks ok.
What is this kind of proxy called? Any one have any suggestions? This is for a Windows .Net (asp.net) and SQL server environment.
Thanks all
Varnish comes to mind - https://www.varnish-cache.org/
Edit
I'd actually use nginx (two years' experience after answering this question). Varnish would be silly to use; nginx would definitely be the better option.
Have a look at JMeter. It's Java-based but allows you to record user journeys and play them back in bulk for stress testing.
I'm trying to get down to the details of what happens once a server gets a request from a client...
Open a socket on the port specified by the request...
Then access the asset or resource?
What if the resource refers to a cgi/script?
What "layers" does the request info have to pass through?
How is the response generated?
I've looked up info on "how the internet works", and "request response cycle", but I'm looking for details as to what happens inside the server.
It seems like you're having a little trouble separating out the different parts of your question so I'll do my best to help you out with that.
First and foremost, a common way of understanding communication between two computers is the OSI model. This model attempts to distinguish the responsibilities of each protocol in a protocol stack. For example, when you surf a website on your home network, the protocol stack is most likely something like
Ethernet-IPv4-TCP-HTTP
This modularization of protocols is used to create a separation of concerns so that developers don't have to "reinvent the wheel" each time they try to get two computers to communicate in some way. If you're trying to write a chat program you don't want to worry about packet loss or internet routing methodologies so you go ahead and take advantage of the lower level protocols that already exist and handle more of the nitty gritty stuff for you.
When people refer to socket communication these days they're typically using TCP or UDP. These are both known as transport protocols. If you'd like to learn more of the fine details on socket communication I would start with UDP because it's a simpler protocol and then move on to TCP.
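For example, a bare-bones UDP exchange in Java looks something like this (the port number is arbitrary); notice how little the transport layer gives you beyond "here is a datagram":

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal UDP example: one socket waits for a datagram, another sends one.
// There is no connection, no ordering and no delivery guarantee - anything
// beyond "a packet arrived" is up to the protocols layered on top of UDP.
public class UdpDemo {
    public static void main(String[] args) throws Exception {
        // Bind the receiver to an arbitrary port (9876 here) before sending,
        // then block until a datagram shows up.
        try (DatagramSocket receiver = new DatagramSocket(9876);
             DatagramSocket sender = new DatagramSocket()) {

            byte[] data = "hello".getBytes();
            sender.send(new DatagramPacket(
                    data, data.length, InetAddress.getLoopbackAddress(), 9876));

            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            receiver.receive(packet);   // blocks until something arrives
            System.out.println("got: " + new String(packet.getData(), 0, packet.getLength()));
        }
    }
}
```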
While your web server is aware of some information in the lower-level protocols, it doesn't really do much with it. Primarily, that's all handled by the operating system libraries, which eventually hand the web server some raw HTTP data that it then begins to process.
To add another layer, HTTP has nothing to do with the gateway language running behind the scenes. This is fairly obvious from the fact that the protocol is the same whether the web server is serving CGI Perl scripts, PHP, ASP.NET or static HTML files. HTTP simply carries the request, and the web server processes the request accordingly.
Hopefully this clarifies a few concepts for you and gives you a better idea what you're trying to understand.
It depends on the server. An Apache 2 server could do any amount of request rewriting, automatic responses (301, 303, 307, 403, 404, 500) based on rules, starting a CGI script, exchanging data with a FastCGI script, passing some data to a script module like mod_php, and so on. The CouchDB web server would do something else entirely.
Basically, aside from parsing the request and sending back the appropriate response, there's no real common aspect to web servers.
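That common core (read the request, decide how to answer it) is small enough to sketch. Here is a deliberately naive Java version with made-up routing rules, just to show the shape of the loop that a real server wraps in far more machinery:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Naive single-threaded HTTP server: accept a connection, parse the request
// line, pick a response. Real servers wrap this same core loop in routing
// rules, CGI/FastCGI gateways, keep-alive, chunking, TLS, thread pools, etc.
public class TinyHttpServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = listener.accept()) {
                    BufferedReader in = new BufferedReader(new InputStreamReader(
                            client.getInputStream(), StandardCharsets.US_ASCII));

                    // Request line, e.g. "GET /index.html HTTP/1.1"
                    String requestLine = in.readLine();
                    if (requestLine == null || requestLine.split(" ").length < 3) continue;
                    String path = requestLine.split(" ")[1];

                    // Skip the headers: everything up to the blank line.
                    String header;
                    while ((header = in.readLine()) != null && !header.isEmpty()) { /* ignore */ }

                    // "Routing": decide what the resource is and build a response.
                    boolean found = path.equals("/");
                    String body = found ? "<h1>home</h1>" : "<h1>not found</h1>";
                    String status = found ? "200 OK" : "404 Not Found";

                    OutputStream out = client.getOutputStream();
                    out.write(("HTTP/1.1 " + status + "\r\n"
                            + "Content-Type: text/html\r\n"
                            + "Content-Length: " + body.length() + "\r\n"
                            + "Connection: close\r\n"
                            + "\r\n"
                            + body).getBytes(StandardCharsets.US_ASCII));
                    out.flush();
                }
            }
        }
    }
}
```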
You could try looking into the documentation of the various web servers: Apache, IIS, lighttpd, nginx...
Think of the following services on one box:
SOCKS proxy
HTTP proxy
SSH service
VPN service
I have found a case where it would be beneficial to run all of these services on the same box (to save on high server costs given the low usage), but they all need to listen on port 80 (network security restrictions require it).
I'm a proficient Java developer. What I am brainstorming is whether it's realistic to consider a simple Java app listening on port 80, determining which service a new connection is bound for, and then redirecting traffic from that connection to a local port where the service is listening.
Is there something in the initial packets after the connection that I would be able to key off of to determine the appropriate service?
Creative thoughts are most welcome.
I don't know the structure of all of those protocols, but I would think that the easiest way to find the answer to your question would be to simply write a program that listens on port 80 and writes the initial data to log files, and then connect with each of the above protocols and see if there are obvious patterns.
Running a network analyser like Wireshark on either the server or the client would also work, and you don't have to write any code.
Once you know the patterns, you probably should look up the protocol documentation to verify whether it is really reliable.
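In practice the first few bytes are often distinctive: HTTP requests begin with an ASCII method name, most SSH clients immediately send an identification string starting with "SSH-", a SOCKS5 client's first byte is 0x05, and a TLS record (relevant if the HTTP proxy or VPN traffic is TLS-wrapped) begins with 0x16. A rough Java sketch of the idea, with placeholder backend ports and heuristics you would want to verify against your actual clients:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch of a port-80 demultiplexer: peek at the first bytes of each incoming
// connection, guess the protocol, then tunnel the connection to a local
// backend port. The backend ports are placeholders and the detection rules
// are heuristics, not a complete or verified classifier.
public class PortDemux {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(80)) {
            while (true) {
                Socket client = listener.accept();
                new Thread(() -> route(client)).start();
            }
        }
    }

    static void route(Socket client) {
        try {
            PushbackInputStream in = new PushbackInputStream(client.getInputStream(), 8);
            byte[] peek = new byte[8];
            int n = in.read(peek);
            if (n <= 0) { client.close(); return; }
            in.unread(peek, 0, n);   // push the bytes back so the backend sees them too

            String head = new String(peek, 0, n, StandardCharsets.US_ASCII);
            int backendPort;
            if (head.startsWith("SSH-"))       backendPort = 2222;  // SSH daemon
            else if ((peek[0] & 0xFF) == 0x05) backendPort = 1080;  // SOCKS5 greeting
            else if ((peek[0] & 0xFF) == 0x16) backendPort = 4443;  // TLS handshake record
            else                               backendPort = 8080;  // assume plain HTTP

            try (Socket backend = new Socket("127.0.0.1", backendPort)) {
                Thread upstream = new Thread(() -> pipe(in, backend));
                upstream.start();
                pipe(backend.getInputStream(), client);   // backend -> client
                upstream.join();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try { client.close(); } catch (IOException ignored) { }
        }
    }

    // Copy one direction of the tunnel until that side closes.
    static void pipe(InputStream from, Socket to) {
        try {
            from.transferTo(to.getOutputStream());
        } catch (IOException ignored) {
        }
    }
}
```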
I agree with Luke's answer, and I think that such a creature is within the realm of possibility. Other factors to consider:
If the server receives heavy traffic, there may be some performance impact to running this java redirection service, especially if your heuristics for determining the appropriate destination service are complex.
For the HTTP service, you may want the Java redirector to issue something like a 301 Moved Permanently pointing at the new port.
I am trying my hand at understanding PCAP libraries.
I am able to apply a filter and get the TCP payload at port 80. But what next? How can I read the HTTP data? Suppose I want to know the "User-Agent" field value in the HTTP header. How should I proceed?
I have searched the website (and googled a lot too), and found a related thread here:
writing a http sniffer. But this doesn't really help me.
Thanks!
First, you should know that PCAP gives you packets and will not reconstruct the TCP stream, so you won't be able to read full HTTP TCP streams without first reassembling the data.
Assuming all the data is available in one packet, take a look at my answer to a similar question. All you need to do differently is parse the HTTP header and get the user agent.
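If the whole request head really is in that one payload, extracting the User-Agent is just text processing on the bytes that follow the TCP header. A rough sketch of that step (shown in Java only for brevity; the same scanning logic translates directly to C), assuming you already have the raw TCP payload as a byte array from your capture code:

```java
import java.nio.charset.StandardCharsets;

// Given the raw TCP payload of an HTTP request (i.e. the bytes after the IP
// and TCP headers), find the User-Agent header. Assumes the whole request
// head fits in this one payload and that header lines end with CRLF.
public class UserAgentExtractor {

    public static String userAgent(byte[] tcpPayload) {
        String text = new String(tcpPayload, StandardCharsets.ISO_8859_1);

        // Only look at the header block, which ends at the first blank line.
        int headEnd = text.indexOf("\r\n\r\n");
        String head = headEnd >= 0 ? text.substring(0, headEnd) : text;

        for (String line : head.split("\r\n")) {
            int colon = line.indexOf(':');
            if (colon > 0 && line.substring(0, colon).equalsIgnoreCase("User-Agent")) {
                return line.substring(colon + 1).trim();
            }
        }
        return null;   // not an HTTP request head, or no User-Agent header
    }

    public static void main(String[] args) {
        byte[] fake = ("GET / HTTP/1.1\r\nHost: example.com\r\n"
                + "User-Agent: curl/7.58.0\r\n\r\n").getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(userAgent(fake));   // prints: curl/7.58.0
    }
}
```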
If you don't limit yourself to C, and if you can use Windows, you can write a .NET application and use Pcap.Net to parse Ethernet, IPv4 and TCP perfectly.
Why don't you use a Wireshark Dissector?
There is already a good Pcap wrapper for .NET called Pcap.Net - here it is:
"Pcap.Net is a .NET wrapper for
WinPcap written in C++/CLI and C#. It
Features almost all WinPcap features
and includes a packet interpretation
framework."