I am trying to create a filter in tcpdump that will allow me to examine TCP traffic on ports above 1024.
I came up with: sudo tcpdump tcp portrange 1025-65535, but I'm not sure whether there is a better way to build the filter.
For example, I tried looking for greater-than and less-than syntax for port ranges in tcpdump and BPF but haven't managed to find anything.
# tcpdump 'tcp[0:2] > 1024 or tcp[2:2] > 1024'
(The two bytes at offset 0 of the TCP header, i.e. the source port, are greater than 1024, or the two bytes at offset 2, the destination port, are greater than 1024.)
You can see the BPF filter produced with the -d option:
# tcpdump -d 'tcp[0:2] > 1024 or tcp[2:2] > 1024'
(000) ldh [12]
(001) jeq #0x800 jt 2 jf 12
(002) ldb [23]
(003) jeq #0x6 jt 4 jf 12
(004) ldh [20]
(005) jset #0x1fff jt 12 jf 6
(006) ldxb 4*([14]&0xf)
(007) ldh [x + 14]
(008) jgt #0x400 jt 11 jf 9
(009) ldh [x + 16]
(010) jgt #0x400 jt 11 jf 12
(011) ret #262144
(012) ret #0
It is shorter than the one generated by the portrange version, which is longer partly because it also matches IPv6 (the 0x86dd check below):
# tcpdump -d tcp portrange 1025-65535
(000) ldh [12]
(001) jeq #0x86dd jt 2 jf 9
(002) ldb [20]
(003) jeq #0x6 jt 4 jf 22
(004) ldh [54]
(005) jge #0x401 jt 6 jf 7
(006) jgt #0xffff jt 7 jf 21
(007) ldh [56]
(008) jge #0x401 jt 20 jf 22
(009) jeq #0x800 jt 10 jf 22
(010) ldb [23]
(011) jeq #0x6 jt 12 jf 22
(012) ldh [20]
(013) jset #0x1fff jt 22 jf 14
(014) ldxb 4*([14]&0xf)
(015) ldh [x + 14]
(016) jge #0x401 jt 17 jf 18
(017) jgt #0xffff jt 18 jf 21
(018) ldh [x + 16]
(019) jge #0x401 jt 20 jf 22
(020) jgt #0xffff jt 22 jf 21
(021) ret #262144
(022) ret #0
The reference for this syntax is the pcap-filter man page.
However, your version remains more readable.
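As an aside, if you only need one direction, the same byte-offset syntax can test a single port field; for example, something like this (offset 0 is the source port, offset 2 the destination port) would match only packets whose destination port is above 1024:
# tcpdump 'tcp[2:2] > 1024'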
I have a file with multiple columns:
$ cat file.txt
a bc 34 67
t gh 68 -34
f jh -9 76
h in -66 -14
and so on
I am trying to do the following: when both columns are negative, just extract the line; when both are positive, subtract the smaller of the two columns from the larger and append the result; and when either column is negative, add the two columns and append the result.
For both negative it's quite easy:
less file.txt | egrep -i "\-.*\-" | less
Expected Output:
h in -66 -14
For both positive I tried the following to no avail:
less file.txt | egrep -iv "\-.*\-" | awk '($3>$4 {print $0,($3-$4)}) || ($4>$3 {print $0,($4-$3)})' | less
Expected Output:
a bc 34 67 33
For either negative,
less file.txt | egrep -iv "\-.*\-" | awk '($3<0||$4<0) {print $0,($3+$4)}' | less
Expected Output:
t gh 68 -34 34
f jh -9 76 67
I am seeing this error:
awk: cmd. line:1: (FILENAME=- FNR=3208) fatal: print to "standard output" failed (Broken pipe)
egrep: write error
I know it's a basic thing to do; any help would be appreciated!
One awk idea:
awk '
$3<0 && $4<0 { print $0 ; next }
$3>0 && $4>0 { print $0, ($3>=$4 ? $3-$4 : $4-$3); next }
{ print $0, $3+$4 }
' file.txt
NOTE: may need to be tweaked if $3==0 and/or $4==0 ... depends on OP's requirements for this scenario
This generates:
a bc 34 67 33
t gh 68 -34 34
f jh -9 76 67
h in -66 -14
Another awk implementation:
awk '
    # absolute value helper
    function abs(x) {
        if (x < 0) x = -x
        return x
    }
    # when at least one of the two values is non-negative, append | |$3| - |$4| |;
    # rows where both values are negative fall through unchanged
    $3 >= 0 || $4 >= 0 { $(NF+1) = abs(abs($3) - abs($4)) }
    { print }
' file.txt
a bc 34 67 33
t gh 68 -34 34
f jh -9 76 67
h in -66 -14
If you wanted to do this in plain bash:
# integer absolute value
abs() { echo $(($1 < 0 ? -($1) : $1)); }

while read -ra fields; do
    a=${fields[2]}   # third column
    b=${fields[3]}   # fourth column
    if ((a >= 0 || b >= 0)); then
        # append | |a| - |b| | unless both values are negative
        fields+=($(abs $(($(abs $a) - $(abs $b)))))
    fi
    echo "${fields[*]}"
done < file.txt
I have compiled Quantum ESPRESSO (Program PWSCF v.6.7MaX) for GPU acceleration (hybrid MPI/OpenMP) with the following options:
module load compiler/intel/2020.1
module load hpc_sdk/20.9
./configure F90=pgf90 CC=pgcc MPIF90=mpif90 --with-cuda=yes --enable-cuda-env-check=no --with-cuda-runtime=11.0 --with-cuda-cc=70 --enable-openmp BLAS_LIBS='-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core'
make -j8 pw
Apparently, the compilation ends successfully. Then I execute the program:
export OMP_NUM_THREADS=1
mpirun -n 2 /home/my_user/q-e-gpu-qe-gpu-6.7/bin/pw.x < silverslab32.in > silver4.out
Then the program starts running and prints out the following info:
Parallel version (MPI & OpenMP), running on 8 processor cores
Number of MPI processes: 2
Threads/MPI process: 4
...
GPU acceleration is ACTIVE
...
Estimated max dynamical RAM per process > 13.87 GB
Estimated total dynamical RAM > 27.75 GB
But after 2 minutes of execution, the job ends with an error:
0: ALLOCATE: 4345479360 bytes requested; status = 2(out of memory)
0: ALLOCATE: 4345482096 bytes requested; status = 2(out of memory)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[47946,1],1]
Exit code: 127
--------------------------------------------------------------------------
This node has > 180 GB of available RAM. I checked the memory usage with the top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
89681 my_user 20 0 30.1g 3.6g 2.1g R 100.0 1.9 1:39.45 pw.x
89682 my_user 20 0 29.8g 3.2g 2.0g R 100.0 1.7 1:39.30 pw.x
I noticed that the process stops when RES memory reaches 4 GB. These are the characteristics of the node:
(base) [my_user#gpu001]$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 36 37 38 39 40 41
node 0 size: 95313 MB
node 0 free: 41972 MB
node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
node 1 size: 96746 MB
node 1 free: 70751 MB
node distances:
node 0 1
0: 10 21
1: 21 10
(base) [my_user#gpu001]$ free -lm
total used free shared buff/cache available
Mem: 192059 2561 112716 260 76781 188505
Low: 192059 79342 112716
High: 0 0 0
Swap: 8191 0 8191
The version of MPI is:
mpirun (Open MPI) 3.1.5
This node is a compute node in a cluster, but the error is the same whether I submit the job with SLURM or run it directly on the node.
Note that I compile it on the login node and run it on this GPU node; the difference is that the login node has no GPU attached.
I would really appreciate it if you could help me figure out what could be going on.
Thank you in advance!
I would appreciate any help I can get. The assignment says we have to read in the attached file and use it as an edge list to create a directed graph with weighted edges.
Then there are about 20 other things I have to do from there. This is part of the .txt file to import:
Columns are inbound locations, outbound locations, and travel time in minutes.
Inbound Outbound Minutes
ACY ATL 102
ACY FLL 136
ACY MCO 122
ACY MYR 90
ACY RSW 137
ACY TPA 129
ATL ACY 102
ATL BOS 132
ATL BWI 106
ATL CLE 104
.... and so on, there are probably 50+ locations in total, with around 400 lines
I tried using
read.graph(file.choose(), format="edgelist")
and when I select the .txt file I get the error:
"Error in read.graph.edgelist(file, ...) :
At foreign.c:101 : parsing edgelist file failed, Parse error"
-----EDIT-----
I just used the following code:
inbound <- c(data[, 1])
outbound <- c(data[, 2])
testing <- data.frame(inbound, outbound)
gd <- graph_from_data_frame(testing, directed=TRUE,vertices=NULL)
Which gave this output:
edges from d654854 (vertex names):
[1] 1 ->2 1 ->18 1 ->30 1 ->35 1 ->46 1 ->58 2 ->1 2 ->7 2 ->9 2 ->11 2 ->15 2 ->16 2 ->18 2 ->21 2 ->23 2 ->24 2 ->30
[18] 2 ->33 2 ->34 2 ->36 2 ->37 2 ->41 2 ->58 3 ->18 4 ->18 5 ->18 5 ->30 5 ->35 5 ->46 5 ->58 6 ->18 7 ->2 7 ->9 7 ->11
[35] 7 ->15 7 ->16 7 ->18 7 ->23 7 ->30 7 ->33 7 ->34 7 ->35 7 ->37 7 ->58 8 ->18 9 ->2 9 ->7 9 ->13 9 ->15 9 ->16 9 ->18
[52] 9 ->21 9 ->23 9 ->24 9 ->30 9 ->33 9 ->34 9 ->35 9 ->36 9 ->37 9 ->48 9 ->51 9 ->58 10->18 10->23 10->30 10->35 11->2
[69] 11->7 11->15 11->18 11->23 11->24 11->30 11->34 11->35 11->51 12->18 13->9 13->15 13->18 13->21 13->37 14->15 14->16
[86] 14->18 14->21 14->23 14->24 14->26 14->30 14->33 14->37 15->2 15->7 15->9 15->11 15->13 15->14 15->16 15->18 15->23
[103] 15->24 15->26 15->30 15->33 15->34 15->35 15->36 15->37 15->41 15->42 15->43 15->48 15->52 15->58 16->2 16->7 16->9
[120] 16->14 16->15 16->18 16->21 16->23 16->24 16->26 16->29 16->30 16->33 16->34 16->35 16->36 16->41 16->46 16->54 16->58
+ ... omitted several edges
Is that what I am supposed to get? Or am I still way off?
Using is.igraph(gd) returns true, and using V(gd) and E(gd) both return information.
So I guess my question is how do I properly import the "table" so that the pairs of inbound/outbound flight names are used as edges (I think) for this? I have to make a directed graph with weighted edges to finalize the set up.
Any information on where I should start? I looked through the igraph documentation but I can't find anything about importing from a table and using pairs of characters as edges.
You can import the data as a data.frame and coerce it to a graph. Once you have the graph, you can assign weights.
library(igraph)
xy <- read.table(text = "
ACY ATL 102
ACY FLL 136
ACY MCO 122
ACY MYR 90
ACY RSW 137
ACY TPA 129
ATL ACY 102
ATL BOS 132
ATL BWI 106
ATL CLE 104", header = FALSE, sep = " ")
colnames(xy) <- c("node1", "node2", "weight")
g <- graph_from_data_frame(xy[, c("node1", "node2")])
E(g)$weight <- xy$weight
plot(g, edge.width = E(g)$weight/50, edge.arrow.size = 0.1)
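If the data is coming straight from the file instead of pasted text, the same approach works; here is a sketch, assuming the file is called flights.txt (a hypothetical name), is whitespace-delimited, and has the Inbound/Outbound/Minutes header row shown in the question:
library(igraph)
xy <- read.table("flights.txt", header = TRUE, stringsAsFactors = FALSE)
colnames(xy) <- c("node1", "node2", "weight")
g <- graph_from_data_frame(xy, directed = TRUE)  # extra columns become edge attributes
E(g)$weight                                      # the Minutes column, now attached to the edges
Because graph_from_data_frame() treats any column after the first two as an edge attribute, a column named weight ends up directly in E(g)$weight, so the separate assignment used above is not needed in this variant.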
The data I am using looks as shown below; it has 50,000 instances and 32 variables.
Missing values are present in many variables.
Sorry, I was unable to post the entire data.
I used
library(zoo)
d$V5 <- na.locf(d$V5)
and I further checked the Gini value, which gave me the output below:
Gini(d$V5)
[1] NA
Warning messages:
1: In sum(x * 1:n) : Integer overflow - use sum(as.numeric(.))
2: In n * sum(x) : NAs produced by integer overflow
But d$V5 corresponds to age, which is a number.
The aim was to find the Gini value and information gain and to plot a decision tree; because of the missing values, the decision tree produced only one split.
Hence, filling in the missing values was necessary.
Data:
1 022 F O 044 0 N 31 12 00P 0012 Y Y N Y 0048 731 0.000000 Y N 0 VERA LUCIA N N 300.000000 0000 00 N 0
2 015 F S 018 0 Y 31 20 00 P 0216 Y Y Y Y 0012 853 0.000000 Y N 0 SARA FELIPE N N 300.000000 0000 00 N 0
3 024 F C 022 0 Y 31 08 00 P 0048 Y N Y Y 0012 040 0.000000 Y N 0 HELENA DOMINGOS SOGRA N N 229.000000 0000 00 N 0
4 012 F C 047 0 N 31 25 00 P 0180 Y Y N Y 0024 035 0.000000 Y N 0 JACI VALERIA ALEXANDRA TRAJANO N N 304.000000 0000 00 N 0
5 016 F S 028 0 Y 31 25 00 O 0012 Y Y Y Y 0012 024 0.000000 Y N 0 MARCIA CRISTINA ZANELLA SANDRO L P MARTINS N N 250.000000 0000 00 N 0
.....
49998 023 F S 023 0 Y 31 28 00 P 0264 Y Y Y Y 0012 991 0.000000 Y N 0 NOVINA GLAUCIA N N 240.000000 0000 00 N 1
49999 009 F C 038 0 Y 5 28 00 P 0048 Y Y Y Y 0204 040 0.000000 Y N 0 LILIANE FIGUEIREDO MIRNA CARVALHO NASCIMENTO N N 616.000000 0000 00 N 0
50000 022 M S 029 0 Y 31 23 00 P 0048 Y Y N Y 0036 026 0.000000 Y N 0 TITO MARTINS N N 341.000000 0000 00 N 0
The error you're getting has nothing to do with missing values (which may or may not present a problem of their own). It can easily be reproduced by doing:
sum(1:100000)
#[1] NA
#Warning message:
#In sum(1:1e+05) : integer overflow - use sum(as.numeric(.))
And can also be avoided by converting to doubles:
sum(as.numeric(1:100000))
#[1] 5000050000
So do
d$V5 = as.numeric(d$V5)
and take it from there.
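Applied to the question's data, that would look something like this (a sketch, assuming Gini is the same function the question calls, e.g. from the ineq or DescTools package):
d$V5 <- as.numeric(d$V5)  # store age as double so the sums inside Gini() cannot overflow
Gini(d$V5)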
Three people (A, B, C) are connected to a local server (S1) through SSH (PuTTY, or a Unix console) with the same username (foobar). Is there a way to associate their own IP addresses with the pts devices they create?
For example, a command which displays something like this:
S1:/root# ls -l /dev/pts
crw------- 1 foobar tty 136, 0 16 apr 10:34 0 <-> 192.168.0.A
crw------- 1 foobar tty 136, 2 16 apr 10:22 2 <-> 192.168.0.B
crw------- 1 foobar tty 136, 3 16 apr 09:26 3 <-> 192.168.0.A
crw------- 1 foobar tty 136, 5 16 apr 09:26 5 <-> 192.168.0.C
Thanks!
"who" command shows you the association between pts-s and hostnames (or ip-s if there is no hostname). You can change the hostnames to IP using 'host' command (if this is a requirement for you).