Minimal size of a network buffer - TCP

I'm a student currently taking an Operating Systems course. While studying for an exam I stumbled upon a strange answer to a question and couldn't find an explanation for it.
Question: Suppose we have an operating system that runs on low physical memory, so the designers decided to make the buffer (the one that handles all network-related work) as small as possible. What is the smallest possible size of the buffer?
Answer: It can't be implemented with only one byte, but it can be implemented with a 2-byte buffer.
My thoughts: The question has 4 answer choices, one of which is "3 bytes or more", so I thought that was the right one: in order to establish a connection you need to be able to send at least a TCP/UDP (or similar) packet header that contains all the connection info. So I have no idea why the 2-byte answer is correct (according to the reference). Maybe some degenerate case?
Thanks for the help.

The buffer has to be at least as large as the packet size on the network. That will depend upon the type of hardware interface. I know of no network system, even going back to the days of dialup, that used anything close to 2 bytes.
Maybe, in theory, you could have a network system that used 2-byte packets. The same logic would allow you to use 1-byte packets (or even transmit fractions of a byte per packet).
Sometimes I wonder about the questions CS professors come up with. I guess that's why:
Those who can do, do;
Those who can't do, teach;
Those who can't do and can't teach, teach PE.

Related

How do clients on wireless networks decide who can transmit at any given time?

I've been thinking about wireless networking a little bit recently, and I came upon a realization last night that I can't find an answer to: how do clients know when they can transmit without stomping on another client's transmission?
I assume there is documentation for this sort of thing available, but I've been unable to find anything useful over a half hour of casual Google queries, probably because I don't know the right terms. Apologies in advance if this is a silly question . . .
Here's why I'm confused: based on my understanding of how RF hardware works, we can model the transmission medium as a safe shared register between different RF clients (because what one client broadcasts can be overwritten by other clients, producing a muddle of the two). But safe registers only have consensus number 1, so how can we establish who can transmit at any given point? I'm assuming that only one client can transmit at once -- perhaps this is my fundamental misunderstanding?
Even the use of a randomized consensus protocol seems unwieldy, because the only ones I know of use atomic registers, not safe registers, and also have no upper bound, so two identical devices with the same random seed would proceed for a very long time.
Thanks!
Please check: Carrier sense multiple access with collision avoidance
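To give a flavour of what that protocol does, here is a toy sketch of the listen-before-talk plus random-backoff idea behind CSMA/CA. The "radio" is faked with rand(); a real driver exposes carrier sensing and ACKs, so everything below is illustrative only.

    /* Toy simulation of the CSMA/CA idea: sense the medium, back off a
     * random number of slots, transmit, and widen the contention window
     * when no ACK comes back. The "radio" functions are fakes. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CW_MIN 16
    #define CW_MAX 1024

    static int channel_is_idle(void) { return rand() % 4 != 0; }          /* fake carrier sense */
    static int send_frame(void)      { return rand() % 5 == 0 ? -1 : 0; } /* 0 = ACK received   */

    static int csma_ca_send(void)
    {
        int cw = CW_MIN, attempts = 0;
        for (;;) {
            while (!channel_is_idle())
                ;                               /* wait for the medium to go idle        */
            int slots = rand() % cw;            /* random backoff so stations that       */
            while (slots-- > 0)                 /* waited together don't collide         */
                ;
            attempts++;
            if (send_frame() == 0)              /* got an ACK, we're done                */
                return attempts;
            cw = cw * 2 > CW_MAX ? CW_MAX : cw * 2;  /* collision: widen the window      */
        }
    }

    int main(void)
    {
        printf("frame sent after %d attempt(s)\n", csma_ca_send());
        return 0;
    }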

Endianness of network data transmissions over TCP/IP

Here is a question I've been trying to solve for quite some time. It doesn't pertain to a particular language, although the issue matters less for languages with a VM that specifies endianness. Like the 99.9999% of people who use sockets to send data over TCP/IP, I know that the protocol specifies an endianness for the transmission elements, like the destination address, port and such. What I don't know is whether it requires the payload to be in a specific format to prevent incompatibilities.
For example, let's say I develop a protocol with no presentation layer and, due to the immense dominance of little-endian devices nowadays, decide to make it little-endian (so the positions of the players and such are transmitted in little-endian order). Think of a network module for a game engine, where latency matters and byte conversion would cost a noticeable amount of time. Of course the address, port and all the other protocol-related data would be specified in big-endian as is mandatory; I'm talking about the payload, and only that.
Would that protocol work out of the box (translating the contents as necessary, of course, once the transmission is received) on a big-endian machine? Or would the checksums of the IP protocol, or something of the kind, be computed incorrectly because the data is in a different order, given that the programmer has no control over them unless raw sockets are used?
Since the whole explanation can be misleading, feel free to ask for clarifications.
Thank you very much.
What I don't know is whether it requires the payload to be in a specific format to prevent incompatibilities.
It doesn't, and it has no way of telling. To TCP it's just a byte stream. It is up to the application protocol to decide endianness, and up to the implementors at each end to implement it correctly. There is a convention to use big-endian, but there's no compulsion.
Application-layer protocols dictate their own endianness. However, by convention, multi-byte integer values should be sent in network byte order (big-endian) for consistency across platforms, for example by using the platform-provided hton...() (host-to-network) and ntoh...() (network-to-host) functions in your code. On little-endian systems they do the necessary byte swapping; on big-endian systems they are no-ops. The functions provide an abstraction layer so code does not have to worry about that.
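For illustration, here is a minimal sketch of serializing a couple of hypothetical payload fields with htons()/htonl() and deserializing them with ntohs()/ntohl(). The field names and layout are made up; the point is only that the same source works regardless of the host's endianness.

    /* Minimal sketch: pack a hypothetical player-position payload in
     * network byte order on the sender, unpack it on the receiver.
     * Field names and layout are invented for the example. */
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htons/htonl/ntohs/ntohl */

    /* serialize id/x/y into buf in big-endian (network) order; returns bytes written */
    size_t pack_player_pos(uint8_t *buf, uint16_t id, uint32_t x, uint32_t y)
    {
        uint16_t id_n = htons(id);
        uint32_t x_n  = htonl(x);
        uint32_t y_n  = htonl(y);
        memcpy(buf + 0, &id_n, sizeof id_n);
        memcpy(buf + 2, &x_n,  sizeof x_n);
        memcpy(buf + 6, &y_n,  sizeof y_n);
        return 10;
    }

    /* deserialize back to host order on the receiving side */
    void unpack_player_pos(const uint8_t *buf, uint16_t *id, uint32_t *x, uint32_t *y)
    {
        uint16_t id_n;
        uint32_t x_n, y_n;
        memcpy(&id_n, buf + 0, sizeof id_n);
        memcpy(&x_n,  buf + 2, sizeof x_n);
        memcpy(&y_n,  buf + 6, sizeof y_n);
        *id = ntohs(id_n);
        *x  = ntohl(x_n);
        *y  = ntohl(y_n);
    }

On a little-endian machine the hton/ntoh calls swap bytes; on a big-endian machine they compile to nothing, so the same code runs correctly on both.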

What is the best compression library for very small amounts of data (3-4 KiB)?

I am working on a game engine which is loosely descended from Quake 2, adding some things like scripted effects (allowing the server to specify special effects in detail to a client, instead of having only a limited number of hardcoded effects which the client is capable of.) This is a tradeoff of network efficiency for flexibility.
I've hit an interesting barrier. See, the maximum packet size is 2800 bytes, and only one can go out per client per frame.
Here is the script to do a "sparks" effect (could be good for bullet impact sparks, electrical shocks, etc.)
http://pastebin.com/m7acdf519 (If you don't understand it, don't sweat it; it's a custom syntax I made and not relevant to the question I am asking.)
I have done everything possible to shrink the size of that script; I've even reduced the variable names to single letters. But the result is exactly 405 bytes, which means you can fit at most 6 of these per frame. I also have in mind a few server-side changes that could shave off another 12 bytes, and a protocol change that might save another 6, although the savings would vary depending on the script you're working with.
However, of the remaining 387 bytes, I estimate that only 41 would be unique between multiple usages of the effect. In other words, this is a prime candidate for compression.
It just so happens that R1Q2 (a backward-compatible Quake 2 engine with an extended network protocol) has Zlib compression code. I could lift this code, or at least follow it closely as a reference.
But is Zlib necessarily the best choice here? I can think of at least one alternative, LZMA, and there could easily be more.
The requirements:
Must be very fast (must have very small performance hit if run over 100 times a second.)
Must cram as much data as possible into 2800 bytes
Small metadata footprint
GPL compatible
Zlib is looking good, but is there anything better? Keep in mind, none of this code is being merged yet, so there's plenty of room for experimentation.
Thanks,
-Max
EDIT: Thanks to those who have suggested compiling the scripts into bytecode. I should have made this clear-- yes, I am doing this. If you like you can browse the relevant source code on my website, although it's still not "prettied up."
This is the server-side code:
Lua component: http://meliaserlow.dyndns.tv:8000/alienarena/lua_source/lua/scriptedfx.lua
C component: http://meliaserlow.dyndns.tv:8000/alienarena/lua_source/game/g_scriptedfx.c
For the specific example script I posted, this gets a 1172 byte source down to 405 bytes-- still not small enough. (Keep in mind I want to fit as many of these as possible into 2800 bytes!)
EDIT2: There is no guarantee that any given packet will arrive. Each packet is supposed to contain "the state of the world," without relying on info communicated in previous packets. Generally, these scripts will be used to communicate "eye candy." If there's no room for one, it gets dropped from the packet and that's no big deal. But if too many get dropped, things start to look strange visually and this is undesirable.
LZO might be a good candidate for this.
FINAL UPDATE: The two libraries seem about equivalent. Zlib gives about 20% better compression, while LZO's decoding speed is about twice as fast, but the performance hit for either is very small, nearly negligible. That is my final answer. Thanks for all other answers and comments!
UPDATE: After implementing LZO compression and seeing only slightly better performance, it is clear that my own code is to blame for the performance hit (massively increased number of scripted effects possible per packet, thus my effect "interpreter" is getting exercised a lot more.) I would like to humbly apologize for scrambling to shift blame, and I hope there are no hard feelings. I will do some profiling and then maybe I will be able to get some numbers which will be more useful to someone else.
ORIGINAL POST:
OK, I finally got around to writing some code for this. I started out with Zlib, here are the first of my findings.
Zlib's compression is insanely great. It is reliably reducing packets of, say, 8.5 KiB down to, say, 750 bytes or less, even when compressing with Z_BEST_SPEED (instead of Z_DEFAULT_COMPRESSION). The compression time is also pretty good.
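For reference, a minimal sketch of the kind of zlib round trip described here (compress2() at Z_BEST_SPEED, then uncompress()); the packet contents are placeholders, and you link with -lz:

    /* Sketch of a zlib round trip at Z_BEST_SPEED; the 2800-byte packet
     * is filled with dummy data for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char raw[2800];
        memset(raw, 'A', sizeof raw);                 /* stand-in for real packet data */

        unsigned char packed[4096];                   /* comfortably >= compressBound(2800) */
        uLongf packed_len = sizeof packed;
        if (compress2(packed, &packed_len, raw, sizeof raw, Z_BEST_SPEED) != Z_OK) {
            fprintf(stderr, "compress2 failed\n");
            return 1;
        }
        printf("2800 -> %lu bytes\n", (unsigned long)packed_len);

        unsigned char out[2800];
        uLongf out_len = sizeof out;
        if (uncompress(out, &out_len, packed, packed_len) != Z_OK ||
            out_len != sizeof raw || memcmp(out, raw, out_len) != 0) {
            fprintf(stderr, "round trip failed\n");
            return 1;
        }
        printf("round trip OK\n");
        return 0;
    }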
However, I had no idea the decompression speed of anything could even possibly be this bad. I don't have actual numbers, but it must be taking 1/8 second per packet at least! (Core2Duo T550 @ 1.83 GHz.) Totally unacceptable.
From what I've heard, LZMA is a tradeoff of worse performance vs. better compression when compared to Zlib. Since Zlib's compression is already overkill and its performance is already incredibly bad, LZMA is off the table sight unseen for now.
If LZO's decompression time is as good as it's claimed to be, then that is what I will be using. I think in the end the server will still be able to send Zlib packets in extreme cases, but clients can be configured to ignore them and that will be the default.
zlib might be a good candidate - the license is very good, it works fast, and its authors say it has very little overhead, and overhead is what makes compressing small amounts of data problematic.
you should look at OpenTNL and adapt some of the techniques they use there, like the concept of Network Strings
I would be inclined to use the most significant bit of each character, which is currently wasted: by shifting groups of 9 bytes leftwards, you can fit them into 8 bytes.
You could go further and map the characters into a smaller space - can you get them down to 6 bits (i.e. only 64 valid characters), for example by disallowing lowercase letters and subtracting 0x20 from each character (so that space becomes value 0)?
You could go further still by measuring the frequency of each character and applying a Huffman-type compression to reduce the average number of bits per character.
I suspect that, in the general case, no algorithm will save data much better than that, as there is essentially no redundancy left in the message after the changes you have already made.
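To make the bit-packing suggestion concrete, here is a rough sketch that assumes the script is restricted to characters 0x20-0x5F so each one fits in 6 bits; the sample string and function name are made up:

    /* Pack characters in the range 0x20-0x5F as 6-bit codes, four
     * characters into every three bytes. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* pack src (chars in 0x20..0x5F) into dst; returns bytes written */
    static size_t pack6(uint8_t *dst, const char *src, size_t n)
    {
        size_t bits = 0, out = 0;
        uint32_t acc = 0;
        for (size_t i = 0; i < n; i++) {
            acc = (acc << 6) | (uint8_t)(src[i] - 0x20);   /* append a 6-bit code */
            bits += 6;
            while (bits >= 8) {                            /* flush whole bytes   */
                bits -= 8;
                dst[out++] = (uint8_t)(acc >> bits);
            }
        }
        if (bits)                                          /* pad the tail        */
            dst[out++] = (uint8_t)(acc << (8 - bits));
        return out;
    }

    int main(void)
    {
        const char *msg = "SPARKS COUNT=32 COLOR=0XFFAA00";  /* made-up script fragment */
        uint8_t packed[64];
        size_t n = pack6(packed, msg, strlen(msg));
        printf("%zu chars -> %zu bytes\n", strlen(msg), n);
        return 0;
    }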
How about sending a binary representation of your script?
So I'm thinking along the lines of an Abstract Syntax Tree, with each procedure having an identifier.
This means a performance gain on the clients due to one-time parsing, and a decrease in size due to removing the method names.

How to synchronize media playback over an unreliable network?

I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
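As a rough sketch of that idea: each packet carries an absolute play deadline and the receiver sleeps until the deadline before handing samples to the sound card. The packet layout, the 80 ms lead and the use of CLOCK_REALTIME as the already-synchronized shared clock are all illustrative assumptions.

    /* Sketch of "play later at time T": the sender stamps a deadline a
     * little in the future, the receiver waits for it before playing. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    struct audio_packet {
        uint64_t play_at_ns;       /* absolute deadline on the shared clock */
        uint16_t n_samples;
        int16_t  samples[512];
    };

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* sender side: stamp the packet ~80 ms into the future */
    static void stamp(struct audio_packet *p)
    {
        p->play_at_ns = now_ns() + 80 * 1000000ull;
    }

    /* receiver side: wait for the deadline, then play */
    static void play_when_due(const struct audio_packet *p)
    {
        uint64_t t = now_ns();
        if (p->play_at_ns > t) {
            uint64_t wait = p->play_at_ns - t;
            struct timespec ts = { (time_t)(wait / 1000000000ull),
                                   (long)(wait % 1000000000ull) };
            nanosleep(&ts, NULL);
        }
        /* else the deadline was missed (network glitch): drop or play late */
        /* hand p->samples to the audio device here */
    }

    int main(void)
    {
        struct audio_packet p = { 0, 0, { 0 } };
        stamp(&p);          /* sender would follow this with a multicast send */
        play_when_due(&p);  /* receiver side, after the packet arrives        */
        printf("packet played at its deadline\n");
        return 0;
    }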
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
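A tiny sketch of that bookkeeping, with made-up numbers:

    /* The manual beat sync measures clock offset + output latency, the
     * network sync measures clock offset alone; the difference is this
     * machine's soundcard/driver latency. Values are invented. */
    #include <stdio.h>

    int main(void)
    {
        double manual_offset_ms  = 37.0;  /* from the beat/keypress calibration */
        double network_offset_ms = 12.0;  /* from the NTP-style clock sync      */

        double device_latency_ms = manual_offset_ms - network_offset_ms;  /* 25 ms */

        /* later, for a packet stamped "play at T", start feeding the sound card
           early by the stored device latency so audible output lands on T */
        double deadline_ms      = 5000.0;
        double start_feeding_ms = deadline_ms - device_latency_ms;

        printf("device latency %.1f ms, start feeding at t=%.1f ms\n",
               device_latency_ms, start_feeding_ms);
        return 0;
    }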
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
The synchronization is relative to the position of the listener relative to each speaker. I don't think the reliability of the network would have as much to do with this synchronization as the content of the audio stream would. In order to synchronize, you need to find the distance between each speaker and the listener. Find the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1 ms. This will ensure that the audio stream reaches the listener at the same time. This all assumes an open area, as any nearby objects will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
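A quick sketch of that distance compensation, using made-up speaker distances:

    /* Sound travels roughly 1.1 ft per millisecond, so each nearer speaker
     * is delayed by the extra distance the farthest speaker has to cover. */
    #include <stdio.h>

    #define FT_PER_MS 1.1   /* approximate speed of sound */

    int main(void)
    {
        double dist_ft[] = { 6.0, 14.5, 9.2 };   /* listener-to-speaker distances */
        int n = sizeof dist_ft / sizeof dist_ft[0];

        double farthest = 0.0;
        for (int i = 0; i < n; i++)
            if (dist_ft[i] > farthest)
                farthest = dist_ft[i];

        for (int i = 0; i < n; i++)
            printf("speaker %d: delay %.1f ms\n", i,
                   (farthest - dist_ft[i]) / FT_PER_MS);
        return 0;
    }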

Why can't I get equal upload and download speeds on a symmetrical channel?

I'm assigned to a project where my code is supposed to perform uploads and downloads of some files on the same FTP or HTTP server simultaneously. The speed is measured and some conclusions are drawn from this.
Now, the problem is that on high-speed connections we're getting pretty much expected results in terms of throughput, but on slow connections (think ideal CDMA 1xRTT link) either download or upload wins at the expense of the opposite direction. I have a "higher body" who's convinced that CDMA 1xRTT connection is symmetric and thus we should be able to perform data transfer with equivalent speeds (~100 kbps in each direction) on this link.
My measurements show that without heavy tweaking of the code in terms of buffer sizes and data-link throttling, it's not possible to get the same speeds in the aforementioned conditions. I tried both my multithreaded code and a simple batch file that automates Windows' ftp.exe to perform the data transfer -- same result.
So, the question is: is it really possible to perform data transfers on a slow symmetrical link with equivalent speeds? Is the "higher body" right in their expectations? If yes, do you have any suggestions on what I should do with my code in order to achieve such throughput?
PS.
I completely rewrote the question so it would be obvious that it belongs on this site.
CDMA 1x consists of up to 15 channels of 9.6kbps traffic. This results in a total throughput of 144kbps.
Two channels are used for command and control signals (talking to base stations, associating/disassociating, SMS traffic, ring signals, etc).
That leaves you with up to 124.8kbps.
--> Each channel is one way. <--
They are dynamically switched and allocated depending on the need.
Generally you'll get more download than upload because that's the typical cell phone modem usage. But you'll never get more than 120kbps total aggregate bandwidth.
In practice, due to the overhead of 1xRTT encoding, error correction, resends, etc., you'll typically experience between 60kbps and 90kbps even if you have all the channels possible.
This means that you can probably only get 30kbps-60kbps of upload and download simultaneously.
Further, due to switching the channels dynamically (and the fact that the base station controls this more than your modem - they need to manage base station channels carefully to keep channels free for voice calls) you'll lose time when it switches channels - it's not an instantaneous process.
So - 1xRTT can, in theory, give you 124kbps one way, but due to overhead, switching times, base station capacity, or the phone company simply limiting such connections for other reasons, you can't depend on a symmetrical link.
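Putting those figures into a small back-of-the-envelope calculation (the 60% factor is just an illustrative value consistent with the 60-90kbps range mentioned above):

    /* Back-of-the-envelope version of the CDMA 1xRTT figures above. */
    #include <stdio.h>

    int main(void)
    {
        double per_channel_kbps = 9.6;
        int    total_channels   = 15;
        int    control_channels = 2;

        double raw  = per_channel_kbps * total_channels;                       /* 144.0 kbps */
        double data = per_channel_kbps * (total_channels - control_channels);  /* 124.8 kbps */
        double real = data * 0.6;                               /* roughly 75 kbps after overhead */

        printf("raw %.1f kbps, data channels %.1f kbps, realistic ~%.1f kbps\n",
               raw, data, real);
        printf("split both ways: ~%.1f kbps up and ~%.1f kbps down\n",
               real / 2, real / 2);
        return 0;
    }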
NOTE:
This will vary to some degree based on the provider and the modem. For instance, some modems have 16 channels, and some providers support 16 channels. In some cases those modems and providers work well together and can provide a full 144kbps aggregate raw bandwidth to the application, with only one dedicated channel (which has to work pretty hard) to deal with control, switching, and other issues. Even then, though, with the overhead of the modem communications, then the overhead of PPP, then the overhead of IP, then the overhead of TCP, you're still looking at maybe 100-120kbps total bandwidth, both up and down.
Lastly, no provider yet supports transparent transfer of IP traffic. In other words, if your modem is moving, it will switch to a new base station, but you'll completely drop the PPP session and have to restart it, as well as all the TCP sessions and such. You typically won't get the same IP address, so your TCP sessions will not recover gracefully.
The "fun" aspect to this twist is that this can happen even if you aren't moving. If one base station gets loaded down, you may be transferred to another base station if you are close enough - there are other things that may make your modem transfer even without you moving. So make sure you take this into account, since you seem to be keen on maintaining a full duplex, symmetric channel open. It's tough to write stuff that will recover gracefully, nevermind anticipate it and do it quickly. You would do well to work very closely with a modem manufacturer (such as Kyocera) on this - otherwise you won't get the documentation on how to control the modem chipset at the low level that you need.
-Adam
I think the whole drama about equal high speeds in both directions is because my higher body thinks they have 144 kbps on the uplink AND 144 kbps on the downlink (== TWO pipes), whereas in reality we have 144 kbps in ONE pipe that switches directions as I transfer files.
Please tell me whether I'm right or wrong.
