I am interested in getting the congestion window size of a TCP connection. The connection is created by another program. I am hoping that the congestion window size can be read from some file in /proc, or that there is a call to get this info from the kernel...
So I need more leads on any of those approaches ...
If you are on Linux, you can use getsockopt() on the socket using the IPPROTO_TCP socket level and TCP_INFO socket option.
The struct tcp_info structure has the member tcpi_snd_cwnd. A fairly extensive write up can be found here.
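A minimal sketch, assuming you have the socket descriptor inside your own process (sock is a connected TCP socket; since your connection belongs to another program, you would have to run this inside that program or have the socket passed to you):

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_INFO, struct tcp_info */
#include <sys/socket.h>

static int print_cwnd(int sock)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);

    if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &info, &len) != 0)
        return -1;

    /* tcpi_snd_cwnd is expressed in MSS-sized segments, not bytes. */
    printf("snd_cwnd = %u segments (mss = %u bytes)\n",
           info.tcpi_snd_cwnd, info.tcpi_snd_mss);
    return 0;
}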
FreeBSD also has a similar feature.
Windows provides congestion window information with the GetPerTcpConnectionEStats() call using TcpConnectionEstatsSndCong as the stats type.
My solution was to use tcpdump to capture retransmissions, SACK and ECN-CWR packets. Those indicate that the window size will be collapsed, and each of them collapses the window by a different magnitude. It is then just a matter of calculating those magnitudes and applying them to the initcwnd size.
I think this is easier than what jxh suggested.
I'm currently setting up an AZURE RTOS (ThreadX on STM32), with Ethernet, SPI and ADCs activated.
This STM32 has to pass-through configuration information from time to time, coming from my PC over the Ethernet-Port.
It has to pass this information via SPI to two other STM32s, which makes the first STM32 the system-controller / system-interface. This will be a low-priority task, since the activation of the passed configuration will be started by sync-lines running from the system-controller to the two other STMs.
While doing so, the system-controller has to read in ADC values constantly and pass them via Ethernet / TCP to my computer.
I've used the ThreadX TCP server example, as given by STM, as a starting point.
From there I've managed to set up three servers on three ports, communicating successfully with a Python script on my PC (as a first test).
Now come the two great questions:
1)
Since my input signal may contain frequencies up to 2.5 MHz, I want to digitize this signal with the full 5 MSPS (Nyquist), which ADC3 is capable of.
The smallest internally available data type at full resolution is uint16_t, which makes the data rate work out to R = 16 bit * 5 MSPS = 80 MBit/s (worst case; there is certainly room for optimization, e.g. 8 bits of resolution, which halves the data rate but might not be enough, or 16 bits plus an FFT on the uC, which would also be sufficient since I'm mostly interested in energy per frequency band, but initially I wanted to do this on my computer for best flexibility).
Even if the Ethernet-IF is capable of doing 100MBit/s, the TCP layer of NetXDuo, I bet, is not.
(There is also USB OTG on this board available, but since networked devices are in my opinion more versatile, I prefer using Ethernet ... nevertheless, USB might be a backup solution)
From my measurements, a data stream transmitted to the uC via TCP from within Python, and mirrored back to my PC within a thread, allows for a relatively consistent 20 MBit/s.
... How do I push this speed to a better level?
(I think 20MBit/s is the back-and-forth data-rate, so one-way may be faster)
However, on to the second question:
2)
The ADC within the STM is capable of storing data via DMA to memory.
There are two callbacks available, one at half-full, one at full buffer state.
My problem is mostly about the way of reading out the DMA and/or triggering the conversion in the first place.
How do you do this the "right" way on an RTOS (such that you don't break the RT in RTOS)?
I see some options here, what are the pros/cons you can think of?
a) Let the ADC run freely, calling the call-backs at the respective fill-levels, triggering a TCP-transmission whenever one of the call-backs is reached (see the sketch after this list)
-> may lead to glitches due to insufficient speed of the TCP layer in my opinion.
b) Let the ADC conversion be triggered by a thread, which is preempted and will later TCP-transmit the data, as soon as the memory-buffer is full
-> may lead to inconsistency in the converted values, since you get burst-style conversions, with gaps in between, while the buffer is read
c) Let a thread trigger each conversion individually
-> A no-go I think, since threads are not scheduled often enough to get a decent sample-frequency
d) Let a free-running ADC trigger callbacks, let a thread do the FFT, transmit within another thread the data via TCP
-> May work, but is less flexible, since the data gets crunched within the uC.
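For clarity, here is roughly what I mean by option a), as a minimal sketch assuming STM32 HAL + ThreadX (hadc3, adc_buf, ADC_BUF_LEN, the flag values and send_via_tcp() are my own placeholder names): the DMA callbacks only set event flags, and a low-priority thread does the TCP work, so the ISR path stays short.

#include "tx_api.h"
#include "stm32h7xx_hal.h"               /* adjust to the device family in use */

#define ADC_BUF_LEN     4096
#define EV_FIRST_HALF   0x1u
#define EV_SECOND_HALF  0x2u

extern ADC_HandleTypeDef hadc3;
static uint16_t adc_buf[ADC_BUF_LEN];
static TX_EVENT_FLAGS_GROUP adc_events;

void send_via_tcp(const void *data, ULONG len);   /* placeholder for the NetXDuo send */

void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    tx_event_flags_set(&adc_events, EV_FIRST_HALF, TX_OR);    /* first half ready */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    tx_event_flags_set(&adc_events, EV_SECOND_HALF, TX_OR);   /* second half ready */
}

void adc_tx_thread_entry(ULONG input)
{
    ULONG flags;

    tx_event_flags_create(&adc_events, "adc");
    HAL_ADC_Start_DMA(&hadc3, (uint32_t *)adc_buf, ADC_BUF_LEN);   /* circular DMA */

    for (;;) {
        tx_event_flags_get(&adc_events, EV_FIRST_HALF | EV_SECOND_HALF,
                           TX_OR_CLEAR, &flags, TX_WAIT_FOREVER);
        /* If sending one half ever takes longer than the other half fills,
         * samples are overwritten, which is exactly the glitch risk from a). */
        if (flags & EV_FIRST_HALF)
            send_via_tcp(&adc_buf[0], (ADC_BUF_LEN / 2) * sizeof(uint16_t));
        if (flags & EV_SECOND_HALF)
            send_via_tcp(&adc_buf[ADC_BUF_LEN / 2], (ADC_BUF_LEN / 2) * sizeof(uint16_t));
    }
}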
--> Are there other ways you can think of / what do you think about the ways I named here?
--> What do you think about question 1)?
Have a nice day!
I was writing a TCP implementation, did all the fancy slow and fast retransmission stuff, and it all worked so I thought I was done. But then I reviewed my packet receive function (almost half of the 400 lines total code), and realized that my understanding of basic flow control is incomplete...
Suppose we have a TCP connection with a "sender" and "receiver". Suppose that the "sender" is not sending anything, and the receiver is stalling and then unstalling.
Since the "sender" is not sending anything, the "receiver" sees no ack_no delta. So the two window updates from the "receiver" look like:
ack_no = X, window = 0
ack_no = X, window = 8K
since both packets have the same ack_no, and they could be reordered in transit, how does the sender know which came first?
If the sender doesn't know which came first, then, after receiving both packets, how does it know whether it's allowed to send?
One guess is that maybe the window's upper endpoint is never allowed to decrease? Once the receiver has allocated a receive buffer and advertised it, it can never un-advertise it? In that case the window update could be reliably handled via the following code (assume no window scale, for simplicity):
// window update (https://stackoverflow.com/questions/63931135/)
int ack_delta = pkt_ack_no - c->tx_sn_ack;
c->tx_window = MAX(BE16(PKT.l4.window), c->tx_window - ack_delta);
if (c->tx_window)
Net_Notify(); // wake up transmission
But this is terrible from a receiver standpoint: it vastly increases the memory you'd need to support 10K connections reliably. Surely the protocol is smarter than that?
There is an assumption that the receive buffer never shrinks, which is intentionally undocumented to create an elite "skin in the game" club in order to limit the number of TCP implementations.
The original standard says that shrinking the window is "discouraged" but doesn't point out that it can't work reliably:
The mechanisms provided allow a TCP to advertise a large window and
to subsequently advertise a much smaller window without having
accepted that much data. This, so called "shrinking the window," is
strongly discouraged.
Even worse, the standard is actually missing the MAX operation proposed in the question, and just sets the window from the most recent packet if the acknowledgement number isn't increasing:
If SND.UNA < SEG.ACK =< SND.NXT, the send window should be
updated. If (SND.WL1 < SEG.SEQ or (SND.WL1 = SEG.SEQ and
SND.WL2 =< SEG.ACK)), set SND.WND <- SEG.WND, set
SND.WL1 <- SEG.SEQ, and set SND.WL2 <- SEG.ACK.
Note that SND.WND is an offset from SND.UNA, that SND.WL1
records the sequence number of the last segment used to update
SND.WND, and that SND.WL2 records the acknowledgment number of
the last segment used to update SND.WND. The check here
prevents using old segments to update the window.
so it will fail to grow the window if packets having the same ack number are reordered.
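For concreteness, that rule rendered literally in C looks like this (field and function names are mine; seq_lt/seq_le are the usual modulo-2^32 sequence comparisons):

#include <stdint.h>

static int seq_lt(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }
static int seq_le(uint32_t a, uint32_t b) { return (int32_t)(a - b) <= 0; }

/* RFC 793 window update: the window is copied verbatim from the most recent
 * "acceptable" segment; there is no MAX against the previous value. */
void rfc793_window_update(uint32_t seg_seq, uint32_t seg_ack, uint32_t seg_wnd,
                          uint32_t *snd_wl1, uint32_t *snd_wl2, uint32_t *snd_wnd)
{
    if (seq_lt(*snd_wl1, seg_seq) ||
        (*snd_wl1 == seg_seq && seq_le(*snd_wl2, seg_ack))) {
        *snd_wnd = seg_wnd;   /* taken verbatim from this segment */
        *snd_wl1 = seg_seq;
        *snd_wl2 = seg_ack;
    }
}

Two pure window updates carrying identical SEG.SEQ and SEG.ACK both pass the check, so whichever arrives last wins: if the "window = 8K" update is overtaken in transit by the stale "window = 0" one, the sender is left stuck.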
Bottom line: implement something that actually works robustly, not what's in the standard.
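In code, "works robustly" is essentially the MAX from the question, phrased as a right edge of the send window that is only ever allowed to advance (names illustrative, seq_lt as in the sketch above):

/* c->snd_right_edge holds the highest sequence number the peer has ever
 * allowed us to send (the largest SND.UNA + SND.WND seen so far). */
uint32_t offered_edge = seg_ack + seg_wnd;
if (seq_lt(c->snd_right_edge, offered_edge))
    c->snd_right_edge = offered_edge;      /* never moves backwards */
/* usable window = c->snd_right_edge - SND.NXT */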
I am generating a certain signal (a digital pulse) in one of my Verilog modules running on the programmable logic of a Xilinx Zynq chip. The signal is pretty fast, with a clock of about 200 MHz.
I also have a simple Linux system with a framebuffer Qt interface running, for later controlling my application.
How can I sample my signal in order to make an oscilloscope-like interface inside my Qt app? I want to be able to provide a visual of the pulse I am generating.
What do I need to use to be able to sample enough data at such a clock frequency? And how do I pass it to Qt with a kernel module or mmap?
You would do best to do what most oscilloscopes do: sample the data to RAM, and only then transfer it to the processor for display/analysis, at a more "relaxed" pace.
On the FPGA side you will need a state machine that detects some sort of start or trigger condition, probably after a bit in a mode register is set from the software side to arm it.
The state machine will then fill samples into a buffer made of one or more block RAMs. If you want to place the trigger somewhere in the middle of the captured samples, you should treat the buffer as a circular buffer and have it record continuously, stopping a configurable number of samples after the trigger, so that some desired number of samples from before the trigger condition remain un-overwritten by newer ones following it.
Since FPGA block rams are typically dual port, you can simply hook the other port up to your CPU bus for readout. You will probably want a register to read the state of the sampling state machine, and if you go with the circular buffer approach, the address where it stopped, so that you can unwrap the data to a linear record of time.
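On the software side, readout then amounts to copying the block RAM contents over the bus and rotating them so that time runs linearly. A sketch under the assumption that the capture memory and a stop-address register are memory-mapped (all addresses, register names and the depth here are made up; on Linux you would mmap() the region rather than hardcode pointers):

#include <stdint.h>

#define CAPTURE_DEPTH 4096u                      /* samples held in the BRAM */

/* Hypothetical memory-mapped capture core. */
static volatile uint32_t *const CAPTURE_STATUS = (uint32_t *)0x43C00000;  /* bit 0 = done */
static volatile uint32_t *const CAPTURE_STOP   = (uint32_t *)0x43C00004;  /* last write address */
static volatile uint16_t *const CAPTURE_BRAM   = (uint16_t *)0x43C10000;

/* Copy the circular capture buffer into out[] so that out[0] is the oldest sample. */
static int read_capture(uint16_t *out)
{
    if (!(*CAPTURE_STATUS & 1u))
        return -1;                               /* capture not finished yet */

    uint32_t stop = *CAPTURE_STOP % CAPTURE_DEPTH;
    for (uint32_t i = 0; i < CAPTURE_DEPTH; i++)
        out[i] = CAPTURE_BRAM[(stop + 1u + i) % CAPTURE_DEPTH];
    return 0;
}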
Trying to do streaming realtime sampling might be possible, but would be a lot harder and it is not clear that you could do anything meaningful with the data so produced in real time. Still, if you want to try you would probably need to put a FIFO buffer in between the sampling and the processor bus, as you will probably only be able to consume data in chunks, while having to service other operations in between, so something is needed to absorb the constant-rate inflow of samples. Another approach could be to try to build a DMA engine which would write samples directly to external system ram, but that will likely be even harder.
You could also see if there are any high speed interfaces available in the CPU which you could leverage - they might be things originally intended for video, for example.
It also appears that you are measuring only a digital signal, i.e., probably one bit. If you need to handle a higher input sample rate than the FPGA fabric can support, you could potentially use something like a deserializer block at the edge of the FPGA to turn the 1-bit input stream into a slower stream of wider samples to store.
In terms of output, once you have a vector of samples in a buffer it's pretty simple to turn that into a scope/logic analyzer type plot, with as much zooming, cursor annotation, automatic measurement or whatever you like.
Also don't forget that if the intent is only to use this during development, FPGAs and their tools often have the ability to build a logic analyzer right into the design, with the data claimed over the programming interface for plotting on a PC.
I need to create some statistics from packets on my network interface, but I'm concerned only with my TCP sessions. I thought I could do that with nfdump and nfsen. Because I'm new to this stuff, I don't really get what nfdump defines as a 'flow'.
Furthermore, can I get statistics with these tools only for TCP sessions? I mean, for example, that I need the average duration of all the connections (srcip-srcport, dstip-dstport pairs) on a server of mine. For this reason I need the time between the 3WH (three-way handshake) and the closing of each connection (either with [fin/ack, ack] or with [rst]). Is that possible with nfdump-nfsen?
Short answer here is no: you don't have anything in your list of software that generates netflow information. It's not going to work. Netflow collectors do not work as hard as you might like to maintain your idea of a connection - a flow is just a collection of related packets that happen during part of the collection cycle. For a long-lived session, you can expect to see a few flows.
For your application, you will do better to capture syn-ack and fin packets with tcpdump and analyse the timing of these with your favourite text processing tool.
Also, on the left side of your keyboard, you may find a key with an arrow that allows you to type capital letters.
How do I discover how many bytes have been sent to a TCP socket but have not yet been put on the wire?
Looking at the diagram here:
I would like to know the total of Categories 2, 3, and 4 or the total of 3 and 4. This is in C(++) and on both Windows and Linux. Ideally there is a ioctl that I could use, but there doesn't seem to be any.
Under Linux, see the man page for tcp(7).
It appears that you can get the number of untransmitted bytes with ioctl(sock, SIOCOUTQ, ...) (SIOCINQ is the receive-queue counterpart).
Other stats might be available from members of the structure given back by the TCP_INFO getsockopt() call.
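A minimal sketch for the Linux case (sock is assumed to be a connected TCP socket owned by your process). SIOCOUTQ reports the bytes handed to the socket but not yet acknowledged, which should correspond to categories 2 + 3 + 4 in your diagram; newer kernels also define SIOCOUTQNSD for only the not-yet-sent part (3 + 4):

#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ */

/* Bytes written to the socket but not yet acknowledged by the peer; -1 on error. */
static int queued_send_bytes(int sock)
{
    int queued = 0;
    if (ioctl(sock, SIOCOUTQ, &queued) != 0)
        return -1;
    return queued;
}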
Some Unix flavors may have an API way to do this, but there is no way to do it that is portable across different variants.
If you want to determine whether to add data or not: don't worry, send() will block until the data can be placed in the queue. If you don't want it to block, you can pass MSG_DONTWAIT to send(2):
send(socket, buf, buflen, MSG_DONTWAIT);
But this only works on Linux.
You can also set the socket to non-blocking:
fcntl(socket, F_SETFL, fcntl(socket, F_GETFL) | O_NONBLOCK);
This way write will return an error (EAGAIN) if the data cannot be written to the stream.