Once the client finishes, its process closes automatically. I would like the same behavior on the server side, because I want to automate some processes, but the server finishes the test and then remains open.
In iperf3 you can just pass the -1 option and the server will close automatically: it accepts only one connection and exits when that test is finished.
Example:
% iperf3 -s -B 192.168.20.10 -p 70011 -1
I think it depends on the version. I can speak for iperf 2, where we recently added this capability. When the server (-s) is launched there will ultimately be two threads per "server": a listener thread and a traffic (receiver/server) thread. The -t option does a few things: it sets the listener thread's timeout and the traffic threads' run times. The listener thread is the parent of the traffic threads, so it must wait for them to complete before it can terminate.
Example: let's say one issues iperf -s -t 30, which will keep the listener around for 30 seconds. If no clients present themselves within 30 seconds, the "server" terminates after 30 seconds. But if a client connects 20 seconds after the iperf -s -t 30, e.g. iperf -c <server> -t 30, then the listener/server will stay around for 20 + 30 seconds before terminating. (Note: the client's -t <value> isn't passed to the server, so the server's -t needs to be equal to or greater than the client's -t.)
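For instance, a minimal sketch of that timing (assuming an iperf 2 build recent enough to accept -t on the server; <server> is a placeholder):
iperf -s -t 60              # listener stays around for at most 60 seconds
iperf -c <server> -t 30     # 30-second client test; the server's -t must be >= 30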
On the server side of iperf there is no -t option for time limiting. You can use the -P option to limit the number of incoming clients.
For example, if you run the iperf -s -P 1 command, the server shuts itself down after the client finishes the test.
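As a quick sketch of that behaviour (the address is a placeholder):
iperf -s -P 1               # server exits after handling one client
iperf -c <server-ip>        # client runs its test, then the server shuts down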
Use the iperf -t option so that it stops after t seconds. The default iperf client test time is 10 seconds, so it stops after that.
Try the following. Here both will stop after 10 seconds.
Server: iperf -s -t 10
Client: iperf -c <ipaddress> -t 10
Start it in the background, wait until the test is complete, and then kill it.
iperf -s -w 2Mb -p 5001 &
sleep 20
pkill iperf
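A slightly safer variant (just a sketch, assuming no other iperf instances are running that you care about) kills only the server you started, by its PID:
iperf -s -w 2Mb -p 5001 &   # start the server in the background
srv_pid=$!                  # remember its PID
sleep 20                    # give the test time to finish
kill "$srv_pid"             # stop only that server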
I would like to do a TCP DoS attack using iperf in my simulated network (I use Mininet). The only command I could find is the following, which generates UDP burst traffic and is therefore not relevant.
(host1: 10.0.0.1) iperf -s
(host2: 10.0.0.2) iperf -c 10.0.0.1 -b 30M -l 1200
Please let me know if there is a better way to do a TCP DoS attack using iperf, or whether there is any other tool or approach for generating TCP traffic as an attack.
Thanks in advance.
The only thing I could do was increase the number of parallel iperf transmissions from the attacker using threads. That way it sends packets to the server in parallel, so I used the following commands:
(host1: 10.0.0.1) iperf -s
(host2: 10.0.0.2) iperf -c 10.0.0.1 -b 30M -l 1200 -P 6
If you want to send UDP flooding, then you must use the -u switch on the server command:
iperf -s -u
On the client side, using your specification (note that the client also needs -u, and -b to set the UDP bandwidth), it would be:
iperf -c 10.0.0.1 -u -b 30M -t 200 -l 1200 -P 6
iperf is designed for bandwidth testing. If you want to do a DoS attack, please try hping3 or dperf.
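For example, a minimal hping3 sketch against the setup above (assuming hping3 is installed on the attacking host; the target port 5001 is just the iperf default, and this should only be run in your own test network):
hping3 -S --flood -p 5001 10.0.0.1   # flood the target with TCP SYN packets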
I'm using ucarp over linux bonding for high availability and automatic failover of two servers.
Here are the commands I used on each server for starting ucarp :
Server 1 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.229 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
Server 2 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.242 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
and the content of the scripts:
vip-up.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
vip-down.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr del "$2"/24 dev "$1"
Everything works well and the servers switch from one to another correctly when the master becomes unavailable.
The problem is when I unplug both servers from the switch for too long (approximately 30 min). While they are unplugged they both think they are master,
and when I replug them, the one with the lowest IP address tries to stay master by sending gratuitous ARPs. The other one switches to backup as expected, but I'm unable to access the master through its virtual IP.
If I unplug the master, the second server goes from backup to master and is accessible through its virtual IP.
My guess is that the switch "forgets" about my servers when they are disconnected for too long, and when I reconnect them, a transition from backup to master is needed to correctly update the switch's ARP cache, even though the gratuitous ARPs sent by the master should do the job. Note that restarting ucarp on the master does fix the problem, but I would need to restart it every time it has been disconnected for too long...
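I could also try sending a gratuitous ARP by hand after replugging, e.g. (a sketch assuming the iputils arping is installed; bond0 and the virtual IP are the ones from the commands above):
arping -U -I bond0 -c 3 10.110.0.243   # send unsolicited (gratuitous) ARP replies for the VIP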
Any idea why it does not work as I expected and how I could solve the problem?
Thanks.
Below is my command for spawning an FCGI script for nginx.
spawn-fcgi -d /home/ubuntu/workspace -f /home/ubuntu/workspace/index.py -a 127.0.0.1 -p 9001
Now, let's say I want to make changes to the index.py script and reload it without bringing down the system. How do I reload the spawned program so that new connections use the updated program while the others finish? For now I am killing the spawned process and running the command again. I am hoping for something more graceful.
I tried this by the way.
sudo kill -1 `sudo lsof -t -i:9001`
I have recently made something similar for node.js.
The idea is to have index.py as a very simple bootstrap script (which doesn't actually change much over time). It should catch SIGHUP, and reload/reread the application files (which are expected to change frequently).
I am stuck on a small problem.
I'm launching many bsub commands at the same time, each one on a specified host:
bsub -sp 20 -W 0:5 -m $myhostname -q "myQueue" -J "mkdir_script" -o $log_file "script_to_launch param1 param2 param3"
all this inside a for loop, one iteration per hostname.
The problem is that everything is OK for all hosts except one (always the same one). The job is always in PENDING state, and is not moving to RUN state.
The script to execute checks for a folder and creates it if it is not there (so a very small task).
Is there a way to see what happens on that host and why my job is not going to the RUN state?
PS: I just found the bjobs -p command and I get the following message:
Not specified in job submission: 81 hosts;
Closed by LSF administrator: 3 hosts;
What does this message mean?
The -m option limits you to a particular host, which excludes 81 hosts. The other three have been closed by your system administrator. You would have to contact them to find out why.
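As a sketch, two standard LSF commands that may help here (the hostname is the one you pass to -m; reopening a host requires administrator rights):
bhosts $myhostname          # show the host's status (ok, closed, unavail, ...)
badmin hopen $myhostname    # reopen a host that was closed by an administrator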
I have a shell script that starts an ssh session to a remote host and pipes the output to another, local script, like so:
#!/bin/sh
ssh user@host 'while true ; do get-info ; sleep 1 ; done' | awk -f parse-info.awk
It works fine. I run it under the 'supervise' program from djb's daemontools. The only problem is shutting down the daemon. If I terminate the process for this shell script, the ssh and awk processes continue running as orphans. Normally I would solve this problem with exec to replace the supervising shell process, but the two processes run in their own subshells and can't replace the shell process.
What I would like to do is have the supervising shell script 'forward' any signals it receives to at least one of the child processes, so that I can break the pipe and shut down cleanly. Is there an easy way to do this?
Inter-process communication.
You should be looking at pipes, signals, etc.
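As a minimal sketch of one way to forward a signal to the pipeline (assuming it is enough to kill the awk end of the pipe; ssh then dies on the broken pipe the next time it writes):
#!/bin/sh
# run the pipeline in the background and remember the PID of its last stage (awk)
ssh user@host 'while true ; do get-info ; sleep 1 ; done' | awk -f parse-info.awk &
child=$!
# forward TERM/INT so the pipe breaks and everything winds down
trap 'kill -TERM "$child" 2>/dev/null' TERM INT
# wait returns once the child exits or a trapped signal has been handled
wait "$child"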