Capped performance over UDP (and sometimes TCP) in iPerf

I can't seem to figure out what's wrong with my iPerf setup. I am trying to automate the execution of iPerf using a Telnet script (this is the one I am using: https://github.com/ngharo/Random-PHP-Classes/blob/master/Telnet.class.php). I'd like to know what I can do to find the reason for the bottleneck, assuming the PHP script works as expected. Basically, if I run iPerf manually on the command line, I get the desired rates; however, if I run it remotely using the script, I get capped performance.
What I have tried is using tcpdump to write logs while iPerf is running and then reading them in Wireshark. All I can observe is that the time differences between the fragments are larger when using the script, which means the rates will be lower. I don't know what to do next. Any ideas what else I can look at or try? I've tried changing the kernel buffer-size values using sysctl, but this has no effect; running iPerf manually works regardless.
Note that I have tried playing around with all the iPerf configuration options such as -w, -l, and -b (I haven't tried burst mode). No success.
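One way to narrow down where the scripted run loses throughput is to capture both runs with tcpdump and compare per-interval rates using tshark's I/O statistics. A sketch, with interface, port, and file names as assumptions for your setup:

```shell
# capture the scripted run (adjust interface and port to your setup)
tcpdump -i eth0 -w scripted.pcap port 5001 &
# ... run iPerf through the Telnet script, then stop tcpdump ...

# per-second byte counts for each capture; compare the two side by side
tshark -r scripted.pcap -q -z io,stat,1
tshark -r manual.pcap   -q -z io,stat,1
```

If the scripted capture shows evenly spaced but smaller per-second totals, the sender is pacing itself (suggesting the remote shell environment, not the network, is the bottleneck); bursty gaps instead point at scheduling or buffering on the path.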

Related

How to tell .pcapng file frame amount before fully open it?

I get huge .cap files from iptrace on AIX. The file is about 800 MB. I'm on macOS, and tshark has been running for a whole day parsing it.
The CPU on my host stays at 99%. I really need to speed this up.
I've already added the -n flag to tshark.
I'm thinking about adding a frame-number range to the filter, which should narrow down the number of packets to analyze. But I don't know the total number of frames, so I can't add that parameter.
Can I get some general information about the .cap file before fully opening it?
Is there anything else I can do to speed up tshark significantly?
Thanks.
Perhaps TShark is stuck in an infinite loop, in which case the problem isn't "the file is too big" (I have a capture that's 776 MB, and it takes only a few minutes to run through tshark -V, albeit on a 2.8 GHz Core i7 MBP with 16 GB of main memory); the problem is "TShark/Wireshark has a bug".
File a bug on the Wireshark Bugzilla, specifying the TShark command that's been running for a day (with all command-line arguments). You may have to either provide the capture file for testing by the Wireshark developers or run test versions of TShark.
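As for getting general information about the file before fully opening it: capinfos, which ships with Wireshark, reports summary statistics without dissecting every packet, so it is much faster than a full tshark pass. A sketch (the file name is a placeholder):

```shell
# packet count only (fast; no protocol dissection)
capinfos -c huge.cap

# full summary: file type, packet count, duration, data rates
capinfos huge.cap

# once you know the count, restrict tshark to a frame range
tshark -r huge.cap -n -Y "frame.number >= 1 && frame.number <= 100000"
```

Note that a display filter still requires tshark to read the file up to the last matching frame, so the frame range mainly saves dissection and output work, not I/O.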

Process stop getting network data

We have a process (written in C++/managed) which receives network data via TCP/IP.
After running the process for a while and tracking the network load, it seems that the network gets into a frozen state and the process stops receiving data, while other processes on the system using the same NIC operate normally.
The process gets out of this frozen state by itself after several minutes.
Any idea what is happening?
Is there any counter I can track to see if my process is hitting some limit?
It is going to be very difficult to answer specifically:
-- without knowing what exactly your process/application does,
-- whether it is a network chat application, a file server/client, or ......
-- without other details about how your process is implemented and what libraries it uses, if relevant to the problem.
Also, you haven't mentioned what OS and environment you are running this process under, so there is very little anyone can do to help. It could be anything: a busy-wait loop in your code, locking problems if it's multi-threaded code, ....
Nonetheless, here are some options to check.
If it's Linux, try the commands below to debug and monitor the behaviour of the process and see what the problem could be:
top
Check top to see how much CPU and memory your process is using and whether any of the values, especially CPU usage, are abnormally high.
pstack
This shows the stack frames the process is executing at the time of the problem.
netstat
Run this with the necessary options (tcp/udp) to check the state of the network sockets opened by your process.
gcore
This forces your process to dump a core file when the mentioned problem happens; you can then analyze that core file using gdb.
gdb
Then use the where command at the gdb prompt to get a full backtrace of the process (the function it was executing last and the preceding function calls).
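On a Linux host, the steps above can be strung together like this (the process name myserver and the output paths are placeholders):

```shell
pid=$(pgrep -x myserver)                  # placeholder process name

top -b -n 1 -p "$pid"                     # one-shot CPU/memory snapshot
pstack "$pid"                             # current stack of every thread
netstat -tnp 2>/dev/null | grep "$pid/"   # state of its TCP sockets

# snapshot a core without killing the process, then pull all backtraces
gcore -o /tmp/myserver.core "$pid"
gdb -batch -ex "thread apply all bt" ./myserver "/tmp/myserver.core.$pid"
```

Running the pstack/netstat pair once while the process is healthy and again during the freeze, then diffing the two, usually shows whether the threads are blocked in a recv/lock or the sockets have piled up in an unusual state.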

Make uWSGI use all workers

My application is very heavy (it downloads some data from the internet and puts it into a zip file), and sometimes it takes more than a minute to respond (please note, this is a proof of concept). The CPU has 2 cores, and internet bandwidth is at 10% utilization during a request. I launch uWSGI like this:
uwsgi --processes=2 --http=:8001 --wsgi-file=app.py
When I start two requests, they queue up. How do I make them get handled simultaneously instead? I tried adding --lazy, --master and --enable-threads in all combinations; none of them helped. Creating two separate instances does work, but that doesn't seem like good practice.
Are you sure you are not trying to make two connections from the same browser (this is generally blocked)? Try with curl or wget.
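If the browser turns out not to be the culprit: with only --processes and no threads, one slow request ties up a whole worker, so two long downloads can end up serialized. A sketch that gives uWSGI more concurrency and then checks it from two independent clients (the flag values are illustrative, not tuned):

```shell
# 2 worker processes x 4 threads each = up to 8 concurrent requests
uwsgi --master --http :8001 --wsgi-file app.py \
      --processes 2 --threads 4 --enable-threads

# verify with two parallel requests from curl rather than one browser
curl -s -o /dev/null http://localhost:8001/ &
curl -s -o /dev/null http://localhost:8001/ &
wait
```

If both curl requests finish in roughly the time of one, the workers are handling them simultaneously; if the total is about double, something is still serializing them.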

Get process occupying a port in Solaris 10 (alternative for pfiles)

I am currently using pfiles to find the process occupying a certain port on Solaris 10, but it causes a problem when run in parallel.
The problem is that pfiles can't be run in parallel for the same pid; the second invocation returns with the error message:
pfiles: process is traced
Is there any alternative to pfiles for finding the process occupying a port on Solaris?
Alternatively, any information on OS APIs that expose port/process information on Solaris would help.
A workaround would be to use some lock mechanism to avoid this.
Alternatively, you might install lsof from a freeware repository and see if it supports concurrency (I think it does).
I just tested Solaris 11 Express pfiles and it doesn't seem to exhibit this issue.
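The lock-mechanism workaround can be sketched with a mkdir-based mutex, which is atomic and works in plain /bin/sh on Solaris 10 (the lock path and per-pid naming are assumptions):

```shell
#!/bin/sh
# usage: safe_pfiles <pid> -- serializes pfiles calls for the same pid
pid=$1
lockdir=/tmp/pfiles.lock.$pid        # assumed lock location, one per pid

# mkdir either creates the directory (lock acquired) or fails (lock held)
while ! mkdir "$lockdir" 2>/dev/null; do
    sleep 1                          # another pfiles run is in progress
done
trap 'rmdir "$lockdir"' 0            # release the lock on exit

pfiles "$pid"
```

This only serializes callers that go through the wrapper, but since the error arises from your own parallel invocations, that is usually enough.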

Any program that calculates the throughput from the output of tcpdump?

I am currently using tcptrace but am observing some ridiculous throughput numbers in a test that I'm running. I am pretty sure something is wrong with the way I am testing, but before spending any more time on it, is there any other program I can use to verify this?
EDIT: This is for a router simulator that I am running locally on my system that generates a tcpdump output.
You can use ttcp to measure the TCP performance between two systems.
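For a quick sanity check against tcptrace's numbers, a rough throughput figure can also be pulled straight out of a tcpdump text log with awk. This sketch assumes `tcpdump -tt -n` style output, where the first field is an epoch timestamp and each line ends with a `length N` field; the sample log here is a stand-in for real capture output:

```shell
# stand-in for real `tcpdump -tt -n` output (three 1000-byte packets over 2 s)
cat > dump.txt <<'EOF'
1700000000.000000 IP 10.0.0.1.5001 > 10.0.0.2.443: Flags [.], length 1000
1700000001.000000 IP 10.0.0.1.5001 > 10.0.0.2.443: Flags [.], length 1000
1700000002.000000 IP 10.0.0.1.5001 > 10.0.0.2.443: Flags [.], length 1000
EOF

# bytes/second between the first and last packet in the log
awk 'NR == 1 { t0 = $1 }               # remember the first timestamp
     { t1 = $1; bytes += $NF }         # track last timestamp, sum lengths
     END { if (t1 > t0) printf "%.0f bytes/s\n", bytes / (t1 - t0) }' dump.txt
# prints: 1500 bytes/s
```

If this crude figure and tcptrace disagree wildly on the same capture, the problem is more likely in how the simulator writes its tcpdump output than in tcptrace itself.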