As I understand it, flow control as well as error control is employed at both the transport and data link layers. If the data link layer guarantees error-free delivery of packets, then what kinds of errors are caught by the transport layer?
Also, what kinds of errors may happen in UDP that are handled in TCP?
The data link layer checks for errors each time a packet moves from one machine to the next. By machine I mean a router, a packet switch, or an end host (computer, phone, tablet) itself. The transport layer, by contrast, only checks for errors between the end hosts.
Error checking is provided in the transport layer mainly because of the following two reasons:
1. Even if no errors are introduced when a segment is moving over a link, it's possible for errors to be introduced when a segment is stored in a router's memory (for queuing). The data link layer's error checking fails in this scenario.
2. There is no guarantee that all the links between source and destination provide error checking. One of the links may be using a link layer protocol which doesn't provide error checking.
As to your second question, UDP also checks for errors; it just doesn't usually do anything about them. Sometimes it delivers the data to the application layer and informs it that the data is corrupt; other times it simply discards the packet.
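For concreteness, here is a minimal sketch of the 16-bit Internet checksum (RFC 1071) that both UDP and TCP use to detect corruption. A real implementation also covers a pseudo-header containing the IP addresses, which is omitted here:

package main

import "fmt"

// internetChecksum computes the 16-bit ones'-complement checksum from
// RFC 1071, as used by UDP and TCP (pseudo-header omitted for brevity).
func internetChecksum(data []byte) uint16 {
	var sum uint32
	// Sum the data as big-endian 16-bit words.
	for i := 0; i+1 < len(data); i += 2 {
		sum += uint32(data[i])<<8 | uint32(data[i+1])
	}
	// An odd trailing byte is padded with a zero byte.
	if len(data)%2 == 1 {
		sum += uint32(data[len(data)-1]) << 8
	}
	// Fold the carries back into the low 16 bits.
	for sum>>16 != 0 {
		sum = (sum & 0xffff) + (sum >> 16)
	}
	return ^uint16(sum) // ones' complement of the folded sum
}

func main() {
	payload := []byte("transport-layer data")
	fmt.Printf("checksum: 0x%04x\n", internetChecksum(payload))
	// A receiver recomputes the sum over the data plus the transmitted
	// checksum field; anything but all-ones means the segment is corrupt.
}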
I read golang's source code recently, and found a place that doesn't make sense to me.
if t.MaxIdleConns != 0 && t.idleLRU.len() > t.MaxIdleConns {
	oldest := t.idleLRU.removeOldest()
	oldest.close(errTooManyIdle)
	t.removeIdleConnLocked(oldest)
}
The code is here: https://github.com/golang/go/blob/96c8cc7fea94dca8c9e23d9653157e960f2ff472/src/net/http/transport.go#L984-L988
Question:
When idleConn's length > t.MaxIdleConns, why not close the connection directly, instead of putting it into the list and removing the oldest connection from the list at the same time? As I understand it, it would be simpler to close the redundant connection, and the result would still be correct.
I'd speculate that the reason is that the server (the remote end) is not by any means obliged to keep an idle connection open, and even if it decides to do so, it's not obliged to keep it open for as long as the client requested.
Another possible problem is that idle TCP sessions can be proactively terminated by intermediate network nodes (for instance, NAT devices usually have to maintain state for each TCP session they route, and so they are "interested" in getting rid of sessions they deem dead). TCP keepalives may help, but they have to be enabled, which is not always the case, and their "pings" have to be frequent enough for an intermediate node to consider the session alive, which, again, is also not always the case.
These issues can easily lead to a session that has been idling for some time being closed, either by the server or by an intermediate network node.
Add to that the fact that net/http basically has no simple means of knowing that a particular idle connection has really been closed and become unusable in the window between the last HTTP session performed on it and the moment net/http pulls that connection to carry out another session.
If you read the code a bit further, you'll find that net/http tries to use an idle connection, and if it finds the connection is dead, it picks the next one, repeating this until one works or it is forced to establish a new connection.
Hence a more recent connection simply has a greater chance of being ready to serve another HTTP session.
Add to that another line of reasoning, somewhat related to the one presented above: the server might have a limit on the number of simultaneously active connections (total, per source host, or both), and in production setups it usually does. By this logic, if a new connection to the server succeeds, it may mean that some old connection the client believed to be alive, albeit idle, had already been closed by the server, which is why the server was able to accept a new connection while staying within its configured limits. Of course, such logic is just speculation, but not exactly unfounded.
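To illustrate the retry behaviour described above, here is a deliberately simplified sketch. The idlePool type and the isProbablyAlive helper are made up for illustration; the real net/http logic is considerably more involved and discovers dead connections via a background read:

package pool

import "net"

// idlePool is a hypothetical, much-simplified stand-in for net/http's
// idle connection list: the most recently parked connection is last.
type idlePool struct {
	conns []net.Conn
}

// get pops idle connections newest-first, discarding any that look
// dead, and dials a fresh connection only if none survive. A newer
// connection has had less time to be torn down by the server or by a
// NAT box, hence the LRU eviction in the real code.
func (p *idlePool) get(addr string) (net.Conn, error) {
	for len(p.conns) > 0 {
		c := p.conns[len(p.conns)-1] // try the newest idle connection first
		p.conns = p.conns[:len(p.conns)-1]
		if isProbablyAlive(c) {
			return c, nil
		}
		c.Close() // the peer or a middlebox already tore it down
	}
	return net.Dial("tcp", addr) // no usable idle connection: dial anew
}

// isProbablyAlive is a placeholder: there is no portable, reliable
// liveness probe for an idle TCP connection, which is exactly the
// problem described above; death usually surfaces on the next I/O.
func isProbablyAlive(c net.Conn) bool {
	return c != nil
}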
Given the degree of uselessness one has come to expect from both tutorials:
https://inet.omnetpp.org/docs/tutorials/wireless/doc/step5.html
and manual pages:
https://doc.omnetpp.org/omnetpp/manual/#sec:ned-lang:warmup:network
how can collision be modelled at the application layer?
You did not find a tutorial on how collisions can be modelled at the application layer simply because collisions do not occur at the application layer.
Generally, a collision may occur when some medium (or layer) cannot be accessed simultaneously by many elements. However, there is no such limitation at the application layer. An application may send a packet at any time; that packet will be processed by the transport layer (TCP or UDP) and then passed to the network layer. The network layer has a buffer, so even when two or more applications send packets at the same time, no conflict occurs.
Regarding the details presented in your question:
How can hostSink check whether hostA or hostB are still sending packets [originally: signals]? Answer: hostSink cannot determine whether hostA is still sending packets. The simulation reflects the behavior of a real network, and in a real network a host does not know whether another host is still sending packets.
How does time "pass" in a simulation? Answer: OMNeT++ is a discrete event simulator, and according to the Simulation Manual:
A discrete event system is a system where state changes (events) happen at discrete instances in time, and events take zero time to happen.
It means that the simulation internally maintains a variable called currentSimtime. At the beginning, currentSimtime = 0. When the first event (for example, sending an ARP packet) is scheduled at, say, t = 0.003 s, currentSimtime is set to 0.003 s and the sending of the ARP packet is executed.
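As a toy illustration of that mechanism (this is not OMNeT++ code, just a sketch of a discrete event loop in Go):

package main

import (
	"fmt"
	"sort"
)

// event is a state change scheduled at a discrete instant of simulated time.
type event struct {
	at     float64 // simulation time, in seconds
	action string
}

func main() {
	// A future event set, as might be scheduled by simulation modules.
	events := []event{
		{0.003, "send ARP packet"},
		{0.010, "receive ARP reply"},
		{0.001, "application sends first packet"},
	}
	// Events are always processed in timestamp order.
	sort.Slice(events, func(i, j int) bool { return events[i].at < events[j].at })

	currentSimtime := 0.0
	for _, ev := range events {
		// The clock jumps straight to the next event's timestamp; no
		// time "passes" in between, and the event itself takes zero time.
		currentSimtime = ev.at
		fmt.Printf("t=%.3fs: %s\n", currentSimtime, ev.action)
	}
}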
I'm trying to get a deeper understanding of how IIS works.
http.sys, I understand, is one of its major components. However, I have been having trouble finding easily digestible information about it. I couldn't get a good mental model going until I heard about WSK; then I think it all fell into place.
From a lot of random googling and a little experimentation, this is my current high-level understanding of why it exists and how it does its stuff.
Why:
Port sharing and higher-performance caching.
How:
User-mode processes use the WinSock API to open a socket listening on a port, gaining access to the networking subsystem, e.g. TCP/IP. Kernel-mode software like the http.sys driver uses the Winsock Kernel (WSK) API to achieve the same end, drawing on the same pool of TCP port numbers as the WinSock API.
IIS, a web service, or anything else that wants to use HTTP registers itself with http.sys using a unique URL/port combination. http.sys opens a socket on that port using WSK (if it hasn't already done so for another URL/port combination with the same port) and listens.
When the transport layer (tcpip.sys) has reconstructed a load of IP packets back into an HTTP request that a client sent, it hands the request to http.sys via the port in the request. http.sys uses the URL/port number to send it to the appropriate process, which parses it however it pleases.
I know it seems like I'm answering my own question, but I'm really not that sure of myself on this and would like some closure so I can get on with more interesting things.
Am I close?
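To make the port-sharing idea concrete for myself, I think of it as loosely analogous to how one Go process can demultiplex a single listening socket by URL prefix; the difference is that http.sys does this in the kernel, across processes, via URL reservations (the paths and port below are arbitrary):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// One socket on one port, several logical "registrations" keyed by
	// URL prefix: loosely analogous to services registering distinct
	// URL/port combinations with http.sys.
	http.HandleFunc("/serviceA/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "handled by service A")
	})
	http.HandleFunc("/serviceB/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "handled by service B")
	})
	// With http.sys the kernel driver owns the socket and routes each
	// request to the registered process; here a single process owns it.
	http.ListenAndServe(":8080", nil)
}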
I wanted to know if there is any way to send files larger than 65,536 bytes using IPv4.
You shouldn't be using raw IP.
There's a reason an implementation of TCP/IP is typically called a "stack". Communication is typically done by layering protocols on top of each other. Each layer takes the layer below it, and either abstracts away some aspect of the lower-level protocol or adds useful functionality.
A web server, for example, ends up using several layers of protocols:
Ethernet, WiFi, or other such protocols, which provide the physical (or radio) connection and the signaling rules that enable machines to talk to each other at all.
IP, which adds the concept of routing and globally available addresses.
TCP, which
adds a concept of "ports", which allows several apps to use the same IP address at the same time without stepping on each other's toes;
abstracts the discrete packets of IP into a full-duplex, arbitrary-length stream of bytes; and
adds detection and correction of errors and lost/duplicate data.
SSL and/or TLS (sometimes), which adds semi-transparent encryption (once established);
HTTP, which adds structure to the stream by organizing its contents into messages (requests and responses) that may (and almost always do) include metadata about how the body of the message should be interpreted.
At the API level, you will almost always start with a transport-layer protocol like TCP, UDP, or sometimes SCTP. It's rare for the OS to even allow you to communicate directly via IP, for security reasons.
So in order to transfer a file, you'd need to
Establish a TCP "connection" to the other machine, which will typically be running a service on some known port. (For HTTP, that's typically 80.) This in itself removes any limit IP imposes on data size.
Set up SSL or TLS, if the other end requires it. You probably shouldn't bother with them yet, though. It's an optional, and non-trivial, addition, and most servers provide some way to communicate without it.
Use the application-layer protocol that the other end understands in order to send a request to store a file (and, of course, the file contents).
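For example, here's a sketch of those steps in Go, using net/http so that the TCP connection (step 1) is handled implicitly; the upload URL and content type are assumptions, since a real server defines its own:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// The file can be far larger than any single IP packet.
	f, err := os.Open("bigfile.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Steps 1-3: the client establishes the TCP connection (and would
	// negotiate TLS for an https:// URL), then sends an application-layer
	// request carrying the file. TCP streams the body as however many IP
	// packets are needed; no 64 KiB limit applies.
	resp, err := http.Post("http://example.com/upload",
		"application/octet-stream", f)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("server replied:", resp.Status)
}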
There is no relationship between the version of IP you are using and the size of file you can transmit. Do your homework, please.
That depends on what you mean by "files". Large files are sent over the network every day, and traffic is still something like 99% IPv4, so I suppose the most correct answer would be "yes". You might want to read up on transport protocols, the most prominent of which is TCP.
Why do we need a sliding window mechanism at the transport layer as well as the data link layer? TCP has its own sliding window mechanism for managing flow and errors, and the data link layer has similar mechanisms. Isn't this redundant?
TCP's and UDP's error control is a single checksum covering each packet. If the check fails, the entire packet must be discarded, and then, when the receiver fails to acknowledge receipt of the data within a timeout, the data must be resent. Even if the corruption was introduced on one data link between two routers along the network path, the data must be resent from the original sender and make the whole journey across multiple network hops again. This is quite expensive, so this kind of data integrity checking is suitable when the error rate is quite low and the cost of retransmitting is amortized over many successfully transmitted packets. Also, the simple checksum is not that robust, and it will miss some errors.
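As a crude illustration of that resend-from-the-original-sender behaviour, here is a stop-and-wait sketch in Go (far simpler than TCP's real sliding window; the send and ack plumbing is invented for the example):

package main

import (
	"errors"
	"fmt"
	"time"
)

// sendReliably sends a packet and resends it from the original sender
// whenever no acknowledgement arrives before the timeout. This is
// stop-and-wait, not TCP's sliding window, but it shows why end-to-end
// retransmission is costly: the data re-crosses every hop on the path.
func sendReliably(send func([]byte) error, ack <-chan struct{}, pkt []byte) error {
	for attempt := 0; attempt < 5; attempt++ {
		if err := send(pkt); err != nil {
			return err
		}
		select {
		case <-ack:
			return nil // the receiver's checksum passed and it acknowledged
		case <-time.After(200 * time.Millisecond):
			fmt.Println("timeout: retransmitting from the original sender")
		}
	}
	return errors.New("gave up after repeated timeouts")
}

func main() {
	ack := make(chan struct{}, 1)
	attempts := 0
	send := func(p []byte) error {
		attempts++
		if attempts >= 3 { // pretend only the third copy survives the path
			ack <- struct{}{}
		}
		return nil
	}
	if err := sendReliably(send, ack, []byte("segment")); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("delivered after %d attempts\n", attempts)
}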
Certain kinds of network links can be expected to have error rates that are too high for the IP transport protocols to deal with efficiently, and so they should provide their own layer of error detection (or even forward error correction) in order for IP to work well over them. Analog modems, for example, are well known to have introduced this kind of integrity protection (e.g. V.42) as their speeds got higher and higher. I don't know much about the details of the kinds of physical links that are popular these days, but I'd say it's a good bet that one or more of DOCSIS, ADSL, wifi, and 3G/4G/LTE incorporate this kind of technology. I would also note that I consider all of this to be happening at the physical layer (layer 1), not at the data link layer (layer 2), although debate is possible on that, because the OSI layer model is never an exact fit for the real world of networking.
This kind of error control doesn't necessarily imply that the physical layer (or data link layer, if you prefer) has any kind of sliding window. It might in some of the more complex schemes designed for the most unreliable kinds of physical links, but the simplest kinds of error checking don't: for example, the PPP and Ethernet FCS. With an FCS, much like with a UDP checksum, a corrupted packet is simply dropped; the protocol has no memory or window from which to retransmit the failed frame, and it doesn't acknowledge successfully received frames to the sender (which is necessary in any kind of sliding window protocol to advance the window).
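For instance, the Ethernet FCS is a CRC-32 over the frame, and a receiver's logic amounts to the sketch below: verify, and on mismatch silently drop, with no window and no retransmission (Go's hash/crc32 stands in for the exact bit-level Ethernet procedure):

package main

import (
	"fmt"
	"hash/crc32"
)

// checkFrame mimics an FCS check: recompute the CRC-32 over the frame
// and compare it with the trailer value computed by the sender.
func checkFrame(frame []byte, fcs uint32) bool {
	return crc32.ChecksumIEEE(frame) == fcs
}

func main() {
	frame := []byte("a link-layer frame")
	fcs := crc32.ChecksumIEEE(frame) // appended by the sending NIC

	frame[0] ^= 0x01 // simulate a single bit flipped on the wire

	if !checkFrame(frame, fcs) {
		// The frame is simply discarded: no ACK, no window, no
		// link-layer retransmission; recovery is left to the ends.
		fmt.Println("FCS mismatch: frame dropped")
	}
}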
All that being said, the transport layer's error control mechanism remains essential because it is end to end. Only at this layer will errors other than those introduced by the transmission medium be caught. The IP transport protocols' checksums will catch corruption that occurred inside routers, errors introduced by physical media that don't or can't catch errors, and errors in host devices or device drivers.
That's error control. Some of the same can be said about flow control: although complex schemes exist to handle kinds of physical links that would otherwise be problematic for IP to work over, most of the simple schemes don't involve any kind of sliding window. For example, when communicating over an RS-232 serial link, flow control is a simple binary control line: when it's asserted, the other end sends data, and when it's deasserted, the other end pauses. The scheme doesn't remember any previously transmitted data in a window, and it doesn't receive acknowledgements.
One final comment: UDP is an unreliable transport protocol. When using UDP, the application is responsible for managing timeouts and retransmissions, and there is a lot of variability in how well individual applications handle this; some do it pretty badly. Since (forward) error correction is at least provided by some of the most notoriously unreliable physical link layers, the situation is tolerable in that UDP, though unreliable, "usually" works. The same can be said of some non-TCP, non-UDP transport protocols.