I was trying to send FTP packets over TCP. Which parameter in ns2 should I use to change the sender's data rate? I tried the following to vary it between 2 Mbps and 8 Mbps, but both values give the same results and the sender's data rate does not change.
$ftp($i) set rate_ 2Mb
There is no "set rate_" option for TCP-based applications like FTP in ns2. TCP has its own flow control (the sliding window) and congestion control mechanisms that determine the data rate.
UDP-based traffic applications, e.g. Application/Traffic/CBR, on the other hand, do have this option.
You can refer to this ns document:
http://www.isi.edu/nsnam/ns/doc/node516.html
You may want to experiment with different TCP window sizes (the window_ parameter of the TCP agent), the link bandwidth, and the number of packets FTP produces, and see what data rate you can achieve.
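As a rough back-of-the-envelope check of why changing one knob alone may not move the measured rate, here is a short C++ sketch (the numbers are illustrative assumptions, not values taken from any ns2 script): a TCP flow's steady-state rate is capped both by the link bandwidth and by window/RTT, and the smaller cap wins.

    #include <algorithm>
    #include <cstdio>

    int main() {
        // Illustrative assumptions, not values from any ns2 script.
        const double window   = 20;       // TCP window, in packets
        const double pktBytes = 1000;     // packet size used by the TCP agent
        const double rttSec   = 0.1;      // round-trip time of the path
        const double linkBps  = 2e6 / 8;  // 2 Mb/s link, in bytes/s (250000)

        // Steady-state TCP throughput is bounded by both limits:
        double windowLimited = window * pktBytes / rttSec;  // 200000 bytes/s
        double rate = std::min(windowLimited, linkBps);     // window wins here
        std::printf("achievable rate: %.0f bytes/s\n", rate);
    }

With these numbers the flow is window-limited, so raising the link from 2 Mb/s to 8 Mb/s would change nothing until the window is raised as well.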
I am developing an SNMP poller which will poll around 40K devices every hour for CPU, memory, bandwidth, and connection-count information. I am currently using the snmp4j API. I perform a separate snmpwalk for each of CPU, memory, bandwidth, and connection count, but given the number of devices this takes a huge amount of time. I am thinking of using an SNMP GETBULK request to get all the information at once, but this is restricted by the maximum response PDU size of the queried device. Is there a way to learn the maximum response PDU size of the remote system, so that I can break up my request PDUs accordingly? I have around 2500 OIDs to poll in one request. Also, I am not allowed to modify the response packet size on the remote system.
This has been a problem for 30 years (SNMP is that old): part of device discovery is determining the maximum response size (in addition to response time, supported versions, etc.) of each device.
It's basically a trade-off of discovery time vs. just assuming some minimal capabilities.
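One way to implement that discovery, as a minimal C++ sketch of the logic only (snmp4j itself is Java, and the sendGetBulk hook below is hypothetical, standing in for whatever request call your library provides): split the OID list into chunks and halve the chunk size whenever a device rejects a request, then cache the size that worked for that device so later hourly polls skip the probing.

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    // Result of one GETBULK attempt against a device.
    enum class BulkStatus { Ok, TooBig, Timeout };

    // Hypothetical transport hook: issue a GETBULK for `oids`, report status.
    using SendGetBulk =
        std::function<BulkStatus(const std::vector<std::string>& oids)>;

    // Poll `allOids` in chunks (startChunk >= 1), shrinking the chunk size
    // whenever the device rejects a request as too big. Returns the chunk
    // size that worked, which can be cached per device.
    std::size_t pollDevice(const std::vector<std::string>& allOids,
                           std::size_t startChunk, const SendGetBulk& send) {
        std::size_t chunk = startChunk;
        std::size_t i = 0;
        while (i < allOids.size()) {
            std::size_t n = std::min(chunk, allOids.size() - i);
            std::vector<std::string> slice(allOids.begin() + i,
                                           allOids.begin() + i + n);
            switch (send(slice)) {
            case BulkStatus::Ok:
                i += n;  // this slice succeeded; move on to the next one
                break;
            case BulkStatus::TooBig:
            case BulkStatus::Timeout:
                if (chunk == 1) return 0;  // even a single OID fails: give up
                chunk /= 2;                // halve and retry the same slice
                break;
            }
        }
        return chunk;
    }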
I read somewhere (but cannot find the source anymore) that there is a certain maximum number of bytes that can be sent in the first TCP window. Sending more data requires an ACK from the receiver, hence another round-trip. To reduce website latency, all above-the-fold content, including the HTTP reply headers, should fit within this number of bytes.
Can anybody remember what the maximum number of bytes in the first TCP window is and how it is calculated?
This is regulated by the initial TCP congestion window (initcwnd). This parameter determines how many segments (of up to MSS bytes each) can be sent without waiting for an ACK during the first phase of slow start. The currently recommended value for most workloads is 10 segments (RFC 6928), but some older systems still use 4. Note also that the window actually in use depends on the client's receive window: if a client advertises a receive window smaller than your initial congestion window, that receive window becomes the limit.
For more info, refer to this page.
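To put rough numbers on it, a tiny C++ sketch of the arithmetic (the MSS and receive window below are illustrative assumptions):

    #include <algorithm>
    #include <cstdio>

    int main() {
        // Illustrative assumptions: typical Ethernet MSS, an IW10 sender,
        // and a client advertising a 64 KB receive window.
        const long mss      = 1460;   // bytes per segment
        const long initcwnd = 10;     // segments (RFC 6928 recommendation)
        const long rwnd     = 65535;  // client's advertised receive window

        // Bytes the sender may have in flight before the first ACK:
        long firstFlight = std::min(initcwnd * mss, rwnd);
        std::printf("first flight: %ld bytes\n", firstFlight);  // 14600
    }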
The sender sends the data.
The receiver waits a couple of seconds and then calculates its receive throughput in bytes/s.
The receiver sends the rate at which it is receiving data (bytes/s) back to the sender.
The sender calculates its own sending rate.
If the sender's rate is significantly higher, it reduces it to match the receiving rate.
Alternatively, a more advanced approach (sketched in code after this list):
The sender starts sending at a predefined minimum rate (e.g. 1 KB/s).
The receiver sends the calculated receiving rate back to the sender.
If the receiving rate matches the sending rate (taking latency into account), increase the rate by a set factor (e.g. rate * 2).
Keep doing this until the sending rate becomes higher than the receiving rate.
Keep monitoring the rates to account for changes in bandwidth, increasing or reducing the rate as needed.
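Here is a minimal C++ sketch of that probing loop, with the network path hidden behind a hypothetical feedback callback (it covers only the initial probe; the ongoing monitoring in the last step is left out):

    #include <algorithm>
    #include <cstdio>
    #include <functional>

    // Hypothetical feedback hook: in a real implementation this would read
    // the receiver's periodic report (bytes/s it actually received) after
    // sending at `sentRate` for a while.
    using ReceiveRateReport = std::function<double(double sentRate)>;

    // The probing scheme described above: start at a minimum rate, double
    // while the receiver keeps up, settle once it falls behind.
    double probeSendRate(const ReceiveRateReport& report) {
        const double minRate   = 1024.0;  // 1 KB/s starting rate
        const double tolerance = 0.95;    // slack for latency/measurement noise
        double rate = minRate;
        for (;;) {
            double received = report(rate);  // send, then read feedback
            if (received >= rate * tolerance) {
                rate *= 2.0;                 // receiver keeps up: probe higher
            } else {
                return std::max(received, minRate);  // settle on what it absorbed
            }
        }
    }

    int main() {
        // Toy stand-in for a real path with ~1 MB/s of capacity.
        double capacity = 1e6;
        auto fakePath = [&](double sentRate) { return std::min(sentRate, capacity); };
        std::printf("settled rate: %.0f bytes/s\n", probeSendRate(fakePath));
    }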
Could this work if you were to implement your own UDP congestion control algorithm?
Sure, it is feasible.
Now, whether it would give you the expected result, and whether it is sound, is another matter...
I think you're trying to address a problem that has been studied and standardized by the very smart people of the IETF. I'd advise you to take a look at RTP/RTCP, which sit on top of UDP, if only to understand why it's a tough problem and to grab some ideas.
https://en.wikipedia.org/wiki/RTP_Control_Protocol
"The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information to participants in a streaming multimedia session."
https://en.wikipedia.org/wiki/Real-time_Transport_Protocol
The fact that its main use case is audio/video streaming is, I think, not so important: the point of RTCP is to provide feedback among participants about a UDP stream of data[*].
Be warned that:
These are complicated protocols, because the problem at hand is indeed complicated.
IIRC, RTCP does not define what the sender side should do with these QoS reports. It just formalizes the way the reports are exchanged between the two sides.
[*]: Well, not entirely true, since in A/V synchronization is an important aspect ("send/receive in a timely manner"), whereas what you're trying to do is "go as fast as possible, yet not too fast".
I've been programming a library for both TCP and UDP networking and thought about using packets. Currently I've implemented a packet class that can be used like the C++ standard library's stream classes (it has << and >> operators for writing and reading data). I plan on sending the packets like so:
bytes 0-7 - a uint64_t giving the size of the packet.
bytes 8 onward - the contents of the packet.
But there's a problem. What if a malicious client sends a size measured in terabytes, with random garbage as filler? The server's memory fills up with the garbage and it freezes or crashes.
Is it a good idea to let the server decide the maximum allowed size of the received packet?
Or should I discard packets and implement transferring data as streams (where reading/writing would be entirely decided by the user of the library)?
(PS: I'm not a native English speaker, so forgive my possibly hideous usage of the language.)
Yes, set a maximum allowed size on the server side. Set it so that the server won't freeze/crash, but not smaller. Predictable behaviour should be the highest goal.
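A minimal C++ sketch of what that check can look like at the framing layer (readExactly is a hypothetical hook standing in for your socket code; the 1 MiB limit is an arbitrary illustrative choice):

    #include <cstdint>
    #include <functional>
    #include <stdexcept>
    #include <vector>

    // Hypothetical transport hook: must fill exactly `len` bytes or throw.
    using ReadExactly = std::function<void(std::uint8_t* buf, std::size_t len)>;

    // Server-chosen upper bound: anything above it is treated as a protocol
    // violation, not as an allocation request.
    constexpr std::uint64_t kMaxPacketSize = 1u << 20;  // 1 MiB, tune to taste

    std::vector<std::uint8_t> readPacket(const ReadExactly& readExactly) {
        // 1. Read the 8-byte length prefix.
        std::uint8_t hdr[8];
        readExactly(hdr, sizeof hdr);

        // 2. Decode it as a little-endian uint64_t (the wire byte order is a
        //    protocol choice; just make both ends agree).
        std::uint64_t size = 0;
        for (int i = 7; i >= 0; --i) size = (size << 8) | hdr[i];

        // 3. Validate BEFORE allocating: this defuses the "terabyte" packet.
        if (size > kMaxPacketSize)
            throw std::runtime_error("packet size exceeds server limit");

        // 4. Only now allocate the buffer and read the body.
        std::vector<std::uint8_t> body(static_cast<std::size_t>(size));
        if (!body.empty()) readExactly(body.data(), body.size());
        return body;
    }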
I am trying to simulate "HTTP headers spanning multiple TCP segments" as mentioned here - http://wiki.wireshark.org/HTTP_Preferences.
How can this be done using netcat? Are there any examples you might be able to point me to to get me started?
Netcat isn't really the right tool for this job, but an easy way to make the headers span segments is just to make them long enough. Eventually, they won't fit in a single segment.
The packet size may be up to 1500 octets (normal Ethernet) or 9000 octets or more (Ethernet with jumbo frames). You'll want to test on an actual network; packet processing on localhost is often optimized and may not behave the same way.
(For the proper tools for this, you probably want to ask on Serverfault or Security.SE, as such tools are normally used for firewall testing.)
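That said, if netcat is what you have, one workable approach is to generate a request whose header section is several kilobytes long and pipe it through nc. A sketch in C++ (example.com and the 8 KB padding are illustrative assumptions; any server that tolerates an oversized header will do):

    #include <cstdio>
    #include <string>

    // Emits an HTTP/1.1 request whose header section is far larger than one
    // MSS (~1460 bytes on standard Ethernet), so the headers must span
    // multiple TCP segments. Pipe the output into netcat:
    //   ./genreq | nc example.com 80
    int main() {
        std::string padding(8000, 'x');  // 8 KB dummy header value
        std::printf("GET / HTTP/1.1\r\n");
        std::printf("Host: example.com\r\n");
        std::printf("X-Padding: %s\r\n", padding.c_str());  // oversized header
        std::printf("Connection: close\r\n");
        std::printf("\r\n");  // blank line ends the header section
    }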