ISN (initial sequence number) - TCP

I had a bug that indexed into a buffer that preceded the buffer containing a SYN packet, and it dropped a byte into the ISN of this packet. Unfortunately, even though the packet was sent, I got nothing back from the other end. Now this surprised me. I didn't think the other end cared how this ISN was calculated, just that it should be difficult for someone to guess. Surely not every TCP/IP implementation has the same calculation, such that the other end flagged this one as an invalid packet?!
Please can some kind soul explain this to my ignorant self.
nb. I believe the packet is going onto the wire because:
1. Wireshark displays it, and
2. the light on my network card winks at me.
Otherwise I might have thought my machine checked the ISN and stopped it.
I guess I'd not make the grade of a security analyst if I can't understand this, hey!
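For what it's worth, the peer never recomputes or validates how an ISN was generated; RFC 6528 only asks that the value be hard for an off-path attacker to predict. (If the stray byte landed in the segment after the TCP checksum was computed, a checksum mismatch would be a more likely reason for the silent drop than the ISN value itself.) Here is a hedged sketch of the RFC 6528 scheme, a clock term plus a keyed hash of the connection 4-tuple; the use of MD5 and the exact constants here are illustrative:

```python
import hashlib
import os
import time

SECRET = os.urandom(16)  # per-boot secret key

def isn(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """RFC 6528-style ISN: a clock term plus a keyed hash of the
    connection 4-tuple. The peer never recomputes this; it only has
    to be hard for an off-path attacker to guess."""
    # M: 32-bit counter that ticks every 4 microseconds
    m = int(time.time() * 250_000) & 0xFFFFFFFF
    # F: keyed hash of the 4-tuple (MD5 in the RFC; any PRF would do)
    h = hashlib.md5(
        f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode() + SECRET
    ).digest()
    f = int.from_bytes(h[:4], "big")
    return (m + f) & 0xFFFFFFFF

print(f"{isn('10.0.0.1', 12345, '10.0.0.2', 80):#010x}")
```

Two connections to the same destination get different ISNs via the clock term; different 4-tuples differ via the hash term.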


Minimal size of a network buffer

I'm a student currently taking an Operating Systems course. I stumbled upon a strange answer to a question while studying for the exam, and I couldn't find an explanation for it.
Question: Suppose we have an Operating System which runs on low physical memory. Thus the designers decided to make the buffer (that handles all the work that is connected to the network) as small as possible. What can be the smallest size of the buffer?
Answer: It can't be implemented with only one byte, but it can be implemented with a 2-byte size.
My thoughts: The question has 4 answers, one of which is "3 bytes or more", so I thought that was the right one, because in order to establish a connection you need at least to be able to send a TCP/UDP header or a similar packet that contains all the connection info. So I have no idea why the 2-byte answer is the right one (according to the reference). Maybe some degenerate case?
Thanks for help.
The buffer has to be at least as large as the packet size on the network. That will depend upon the type of hardware interface. I know of no network system, even going back to the days of dialup, that used anything close to 2 bytes.
Maybe, in theory, you could have a network system that used 2-byte packets. But the same logic would allow you to use 1-byte packets (or even to transmit fractions of a byte per packet).
Sometimes I wonder about the questions CS professors come up with. I guess that's why:
Those who can do, do;
Those who can't do, teach;
Those who can't do and can't teach, teach PE.

Why does the number of links not matter in the transmission time of circuit switching?

I am reading a book about networking, and it says that, in a circuit switching environment, the number of links and switches a signal has to go through before getting to destination does not affect the overall time it takes for all the signal to be received. On the other hand, in a packet switching scenario the number of links and switches does make a difference. I spent quite a bit of time trying to figure it out, but I can't seem to get it. Why is that?
To drastically over-simplify, a circuit-switched environment effectively has a direct line from the transmitter to the receiver once a connection has been established; imagine an old-fashioned phone call going through switchboards. Therefore the transmission time is the same regardless of hops (well, ignoring propagation delay, the physical time it takes the signal to move over the wire, which is very small since it moves at close to the speed of light).
In a packet-switched environment, there is no direct connection. A packet of data is sent from the transmitter to the first hop, which tries to calculate an open route to the destination. It then passes its data onto the next hop, which again has to calculate the next available hop, and so on. This takes time that linearly increases with the number of hops. Think sending a letter through the US postal system. It has to go from your house to a post office, then the post office to a local distribution center, then from the local distribution center to the national one, then from that to the recipient's local distribution center, then to the recipient's post office, then finally to their house.
The difference is that only one connection at a time per circuit can exist on a circuit-switched network; again, think of a phone line with someone using it to make a call. Whereas in a packet-switched network many transmitters and receivers can be sending data at the same time; again, think of many people sending/receiving letters.
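The store-and-forward arithmetic behind this can be sketched in a few lines (the link rate, message size, and packet size below are made-up numbers for illustration):

```python
def circuit_time(msg_bits: float, rate_bps: float) -> float:
    """Once the circuit is set up, bits stream straight through:
    intermediate switches add no store-and-forward delay, so the
    number of links drops out entirely."""
    return msg_bits / rate_bps

def packet_time(msg_bits: float, rate_bps: float,
                pkt_bits: float, links: int) -> float:
    """Store-and-forward: each switch must receive a whole packet
    before forwarding it. The first packet takes links * (pkt/rate);
    the remaining packets then arrive back-to-back (pipelined)."""
    n_pkts = msg_bits / pkt_bits
    per_hop = pkt_bits / rate_bps
    return links * per_hop + (n_pkts - 1) * per_hop

# 8 Mbit message, 1 Mbit/s links, 10 kbit packets
print(circuit_time(8e6, 1e6))          # 8.0 s, whether 1 link or 10
print(packet_time(8e6, 1e6, 1e4, 1))   # 8.0 s over 1 link
print(packet_time(8e6, 1e6, 1e4, 4))   # ~8.03 s over 4 links
```

Note that with small packets the extra per-hop cost is tiny compared to the total; the point is only that it exists in packet switching and is zero (after setup) in circuit switching.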

Data Error Checking

I've got a bit of an odd question. A friend of mine and I thought it would be funny to make a serial-port kind of communication between computers using sound. Basically, computers emit a series of beeps to send data, and listen for beeps over a microphone to receive data. In short, the world's most annoying serial port. I have all of the basics worked out. I can filter out sounds of only one frequency and I have sent data from one computer to another. Although the transmission is fairly error-free, being affected only by very loud noises, some issues still exist. My question is: what are some good ways to check the data for errors and, more importantly, to recover from these errors?
My serial communication is very standard once you get past the fact that it uses sound waves. I use one start bit, 8 data bits, and one stop bit in every frame. I have already considered cyclic redundancy checks, and I plan to factor them into my error checking, but CRCs don't account for some of the more insidious issues. For example, consider sending two bytes of data. You send the first one, and it is received correctly, but just after the stop bit of the first byte and before the start bit of the next, a large book falls on the floor, which the receiver interprets as a start bit. Now the true start bit is read as part of the data, and the receiver could be reading garbage for many bytes to come. Eventually, a pause in the data could get things back on track.
That isn't the worst of it though. Bits can be dropped too, and most error checking schemes I can think of rely on receiving a certain number of bytes. What happens when the receiver keeps waiting for bytes that may not come?
So, you can see the complexity of this question. If you can direct me to any resources, or just give me a few tips, I would greatly appreciate your help.
A CRC is just a part of the solution. You can detect bad data, but then you have to do something about it: the transmitter has to re-send the data, and it needs to be told to do that. In other words, a protocol.
The starting point is that you split up the data into packets. A common approach is a start byte that indicates the start of the packet, followed by a packet number, followed by a length byte that indicates the length of the packet, then the data bytes and the CRC. The receiver sends an ACK or NAK back to indicate success.
This solves several problems:
you don't care about a bad start bit anymore, the pause you need to recover is always there
you start a timer when you receive the first bit or byte, declare failure when the timer expires before the entire packet is received
the packet number helps you recover from bad ACK/NAK returns. The transmitter times out and resends the packet, you can detect the duplicate
RFC 916 describes such a protocol in detail. I never heard of anybody actually implementing it (other than me). Works pretty well.
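A minimal sketch of the framing described above (the start-byte value, field widths, and the choice of CRC-32 via `zlib` are illustrative; a real link layer would also need to escape the start byte when it appears inside the payload):

```python
import struct
import zlib

START = 0x7E  # hypothetical start-of-packet marker

def frame(seq: int, payload: bytes) -> bytes:
    """Build a packet: start byte, sequence number, length byte
    (so payload is limited to 255 bytes here), the payload, then a
    CRC-32 over everything after the start byte."""
    header = struct.pack("!BBB", START, seq & 0xFF, len(payload))
    crc = zlib.crc32(header[1:] + payload)
    return header + payload + struct.pack("!I", crc)

def parse(packet: bytes):
    """Return (seq, payload) if the packet checks out, else None
    (the receiver would answer NAK and the sender retransmits)."""
    if len(packet) < 7 or packet[0] != START:
        return None
    seq, length = packet[1], packet[2]
    if len(packet) != 3 + length + 4:
        return None
    (crc,) = struct.unpack("!I", packet[-4:])
    if zlib.crc32(packet[1:-4]) != crc:
        return None
    return seq, packet[3:-4]

pkt = frame(7, b"hello")
assert parse(pkt) == (7, b"hello")
# a single flipped bit is caught by the CRC
corrupted = bytes([pkt[0], pkt[1] ^ 0x01]) + pkt[2:]
assert parse(corrupted) is None
```

The length byte is what lets the receiver arm a timeout: once a start byte arrives, it knows exactly how many more bytes to expect, and can declare failure if they don't come.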

tcp reno, newreno and slow start

When packet loss occurs while in slow start, do the Reno/NewReno algorithms notice possible dupacks, or is it purely slow start -> RTO?
Thus, if sending two packets (at the start of slow start), and the first one goes missing, does slow start do anything other than RTO?
It is confusing, since the RFC states that 'in practice they (slow start & congestion avoidance) are implemented together'. And the Linux source is a bit of a thick read, and it's only one implementation.
When packet loss occurs while in slow start, do the Reno/NewReno algorithms notice possible dupacks, or is it purely slow start -> RTO?
I would say "yes", duplicate ACKs will be detected and acted upon. See RFC 2001, Section 2.3.
Thus, if sending two packets (at the start of slow start), and the first one goes missing, does slow start do anything other than RTO?
This particular example would lead to a "simple RTO". During the beginning of slow start when only two packets can be sent there will at most be one duplicate ACK (triggered by the second packet arriving). There might even be none if both packets are (would be) acknowledged together. But one duplicate ACK does not trigger fast retransmit. So TCP will wait for the retransmission timer to expire.
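The dupack-counting logic can be sketched as a toy model (the class and names below are illustrative, not kernel code):

```python
DUPACK_THRESHOLD = 3  # per RFC 2001: the third duplicate ACK triggers fast retransmit

class RenoSender:
    """Toy model of Reno's loss detection: count duplicate ACKs and
    fast-retransmit on the third one; with fewer, fall back to the RTO."""

    def __init__(self):
        self.last_ack = 0
        self.dupacks = 0

    def on_ack(self, ack: int) -> str:
        if ack > self.last_ack:            # ACK advances: new data acknowledged
            self.last_ack, self.dupacks = ack, 0
            return "new-ack"
        self.dupacks += 1                  # duplicate ACK
        if self.dupacks == DUPACK_THRESHOLD:
            return "fast-retransmit"
        return "wait"                      # too few dupacks: keep waiting

# Two packets in flight at the start of slow start, first one lost:
# the second packet generates at most one duplicate ACK.
s = RenoSender()
print(s.on_ack(0))  # wait -> the sender ends up waiting for the RTO
```

With a larger window, the same loss would produce three or more dupacks and the model would return "fast-retransmit" instead.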
It is confusing, since the RFC states that 'in practice they (slow start & congestion avoidance) are implemented together'. And the Linux source is a bit of a thick read, and it's only one implementation.
I agree that the Linux source is a thick read. But it's definitive, and if you really need to know, it might be the only option :) Unless you find someone who read (or wrote) it; which I have not.

strategy for hit detection over a net connection, like Quake or other FPS games

I'm learning about the various networking technologies, specifically the protocols UDP and TCP.
I've read numerous times that games like Quake use UDP because, "it doesn't matter if you miss a position update packet for a missile or the like, because the next packet will put the missile where it needs to be."
This thought process is all well and good during the flight path of an object, but it's not good for when the missile reaches its target. If one computer receives the message that the missile reached its intended target, but that packet got dropped on a different computer, that would cause some trouble.
Clearly that type of thing doesn't really happen in games like Quake, so what strategy are they using to make sure that everyone is in sync with instantaneous type events, such as a collision?
You've identified two distinct kinds of information:
updates that can be safely missed, because the information they carry will be provided in the next update;
updates that can't be missed, because the information they carry is not part of the next regular update.
You're right - and what the games typically do is to separate out those two kinds of messages within their protocol, and require acknowledgements and retransmissions for the second type, but not for the first type. (If the underlying IP protocol is UDP, then these acknowledgements / retransmissions need to be provided at a higher layer).
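One way to sketch that split (the wire format and class below are hypothetical, not taken from any real game):

```python
class Channel:
    """Toy reliability layer over an unreliable transport: position
    updates go fire-and-forget, while event messages (hits, deaths)
    carry a sequence number and are resent until acknowledged."""

    def __init__(self, send):            # send(bytes): the raw UDP send
        self.send = send
        self.seq = 0
        self.pending = {}                # seq -> payload awaiting an ACK

    def send_unreliable(self, payload: bytes):
        self.send(b"U" + payload)        # lost? the next update replaces it

    def send_reliable(self, payload: bytes):
        self.seq += 1
        self.pending[self.seq] = payload
        self.send(b"R" + self.seq.to_bytes(4, "big") + payload)

    def on_ack(self, seq: int):
        self.pending.pop(seq, None)      # delivered: stop resending

    def resend_pending(self):            # call periodically on a timer
        for seq, payload in self.pending.items():
            self.send(b"R" + seq.to_bytes(4, "big") + payload)

sent = []
ch = Channel(sent.append)
ch.send_unreliable(b"pos 10,20")         # may be lost; nobody retransmits
ch.send_reliable(b"hit player3")         # kept in pending until acked
ch.resend_pending()                      # still unacked: goes out again
ch.on_ack(1)
ch.resend_pending()                      # acked: nothing left to resend
assert len(sent) == 3
```

The sequence number also lets the receiver discard duplicates when an ACK is lost and the sender retransmits an event it already applied.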
When you say that "clearly doesn't happen", you clearly haven't played games on a lossy connection. A popular trick amongst the console crowd is to put a switch on the receive line of your ethernet connection so you can make your console temporarily stop receiving packets, so everybody is nice and still for you to shoot them all.
The reason that could happen is that the console that did the shooting decides whether it was a hit or not, and relays that information to the opponent. That way, out-of-sync or laggy hit data can still be decided deterministically. Even if the remote end didn't think the shot was a hit, it should be close enough that it doesn't seem horribly bad. It works in a reasonable manner, except for the trick I've mentioned above. Of course, if you assume your players are not cheating, this approach works quite reasonably.
I'm no expert, but there seem to be two approaches you can take. Let the client decide if it's a hit or not (allows for cheating), or let the server decide.
With the former, if you shoot a bullet, and it looks like a hit, it will count as a hit. There may be a bit of a delay before everyone else receives this data though (i.e., you may hit someone, but they'll still be able to play for half a second, and then drop dead).
With the latter, as long as the server receives the information that you shot a bullet, it can use whatever positions it currently has to determine if there was a hit or not, then send that data back to you. This means neither you nor the victim will be aware of whether you hit or not until that data is sent back.
I guess to "smooth" it out you let the client decide for itself, and then if the server pipes in and says "no, that didn't happen" it corrects. Which I suppose could mean players popping back to life, but I reckon it would make more sense just to set their life to 0 until you get a definitive answer, so you don't have weird graphical things going on.
As for ensuring the server/client has received the event... I guess there are two more approaches. Either get the server/client to respond "Yeah, I received the event" or forget about events altogether and just think about everything in terms of state. There is no "hit" event, there's just HP before and after. Sooner or later, it'll receive the most up-to-date state.