How to know the size of a packet - networking

Suppose I start a TCP session and close it after some time. How can I find out how many packets were used in the overall session, and what their sizes were?

Adding to the above: tshark is a tool to sniff packets on a Linux machine.
Here is an example:
cmd: tshark -n -T fields -e ip.src -e tcp.seq -e tcp.len -i <interface>
ip.src = source IP |
tcp.seq = sequence number |
tcp.len = length of the TCP payload
Here is a snapshot; the third column is the length of the TCP packets:
198.252.206.140 14766 117
192.168.1.2 2583 0
192.168.1.2 2583 632
190.93.245.58 1 679
192.168.1.2 522 0
198.252.206.140 0 0
192.168.1.2 1 0
198.252.206.140 1 1440
192.168.1.2 580 0
198.252.206.140 1441 1283
192.168.1.2 580 0
198.252.206.140 14883 145
192.168.1.2 1 556
192.168.1.2 3215 0
192.168.1.2 522 564
190.93.245.58 680 0
190.93.245.58 680 1440
192.168.1.2 1086 0
190.93.245.58 2120 1095
192.168.1.2 1086 0
198.252.206.17 1 1440
^C192.168.1.2 557 0
198.252.206.17 1441 208
192.168.1.2 557 0
192.168.1.2 557 585
192.168.1.2 1086 607
190.93.245.58 3215 343
192.168.1.2 1693 0
198.252.206.17 1649 270

You'll have to use a packet sniffer like Wireshark. Even if you send very structured data at sufficiently delayed intervals, it's not really possible to pre-compute how that data will be split into packets; there are too many variables over which you have minimal, or no, control.
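If you saved the session to a capture file, you can also total it up afterwards. Here is a minimal sketch using scapy (assuming scapy is installed; the file name session.pcap is hypothetical):

from scapy.all import rdpcap, TCP

# Read the capture and keep only TCP packets.
packets = rdpcap("session.pcap")                 # hypothetical capture file name
tcp_packets = [p for p in packets if TCP in p]

frame_bytes = sum(len(p) for p in tcp_packets)                  # bytes on the wire
payload_bytes = sum(len(p[TCP].payload) for p in tcp_packets)   # TCP payload only
print(len(tcp_packets), "TCP packets,", frame_bytes, "bytes on the wire,", payload_bytes, "bytes of payload")

If you need per-connection totals rather than a grand total, group the packets by source/destination address and port before summing.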

Related

Find groups of thousands which sum to a given number, in lexical order

A large number can be comma-formatted into groups of three digits to make it easier to read, e.g. 1050 = 1,050 and 10200 = 10,200.
The sum of each of these groups of three would be:
1050=1,050 gives: 1+50=51
10200=10,200 gives: 10+200=210
I need to search for matches in the sum of the groups of threes.
Namely, if I am searching for 1234, then I am looking for numbers whose sum of threes = 1234.
The smallest match is 235,999 since 235+999=1234. No other integer less than 235,999 gives a sum of threes equal to 1234.
The next smallest match is 236,998 since 236+998=1234.
One can add 999 each time, but this fails once a group reaches 999, since the addition then carries an extra 1 into the next group.
More generally, I am asking for the solutions (from smallest to largest) to:
a + b + c + d + … = x
where a, b, c, d, … is an arbitrary number of integers between 0 and 999 and x is a fixed integer.
Note there are infinitely many solutions to this for any positive integer x.
How would one generate the solutions starting with the smallest numbers (for the first y solutions, where y can be arbitrarily large)?
Is there a way to do this without brute force looping one by one? I'm dealing with potentially very large numbers, which could take years to loop through in a straight loop. Ideally, one should do this without failed attempts.
The problem is easier to think about if instead of groups of 3 digits, you just consider 1 digit at a time.
An algorithm:
Start by putting all of x in group 0 (the rightmost group).
Create a loop that each time prints the next solution.
"Normalize" the groups by moving all that is too large from the right to the left, leaving only the maximum value at the right.
Output the solution
Repeat:
Add 1 to the penultimate group
This can carry to the left if a group gets too large (e.g. 999 + 1 is too large)
Check that the result did not get too large (a[0] should be able to absorb what was added)
If the result got too large, set the group to zero and continue incrementing the earlier groups
Calculate the last group to absorb the surplus (can be positive or negative)
Some Python code for illustration:
x = 1234
grouping = 3
max_iterations = 200
max_in_group = 10**grouping - 1
a = [x]
while max_iterations > 0:
    # step 1: while a[0] is too large: redistribute to the left
    i = 0
    while a[i] > max_in_group:
        if i == len(a) - 1:
            a.append(0)
        a[i + 1] += a[i] - max_in_group
        a[i] = max_in_group
        i += 1
    num = sum(10**(grouping*i) * n for i, n in enumerate(a))
    print(f"{num} {num:,}")
    # print("".join([str(t) for t in a[::-1]]), ",".join([str(t) for t in a[::-1]]))
    # step 2: add one to the penultimate group; while that group is already full: set it to 0 and
    # increment the group left of it;
    # while the surplus is too large (because a[0] is too small) repeat the incrementing
    i0 = 1
    surplus = 0
    while True:  # needs to be executed at least once, and repeated if the surplus became too large
        i = i0
        while True:  # increment a[i] by 1, which can carry to the left
            if i == len(a):
                a.append(1)
                surplus += 1
                break
            else:
                if a[i] == max_in_group:
                    a[i] = 0
                    surplus -= max_in_group
                    i += 1
                else:
                    a[i] += 1
                    surplus += 1
                    break
        if a[0] >= surplus:
            break
        else:
            surplus -= a[i0]
            a[i0] = 0
            i0 += 1
    # step 3: a[0] absorbs the surplus created in step 2, although a[0] can temporarily get out of bounds
    a[0] -= surplus
    surplus = 0
    max_iterations -= 1
Abbreviated output:
235,999 236,998 ... 998,236 999,235 ... 1,234,999 1,235,998 ... 1,998,235 1,999,234 2,233,999 2,234,998 ...
Output for grouping=3 and x=3456:
459,999,999,999 460,998,999,999 460,999,998,999 460,999,999,998 461,997,999,999
461,998,998,999 461,998,999,998 461,999,997,999 461,999,998,998 461,999,999,997
462,996,999,999 ...
Output for grouping=1 and x=16:
79 88 97 169 178 187 196 259 268 277 286 295 349 358 367 376 385 394 439 448 457 466
475 484 493 529 538 547 556 565 574 583 592 619 628 637 646 655 664 673 682 691 709
718 727 736 745 754 763 772 781 790 808 817 826 835 844 853 862 871 880 907 916 925
934 943 952 961 970 1069 1078 1087 1096 1159 1168 1177 1186 1195 1249 1258 1267 1276
1285 1294 1339 1348 1357 1366 1375 1384 1393 1429 1438 1447 1456 1465 1474 1483 1492
1519 1528 1537 1546 1555 1564 1573 1582 1591 1609 1618 1627 1636 1645 1654 1663 1672
1681 1690 1708 1717 1726 1735 1744 1753 1762 1771 1780 1807 1816 1825 1834 1843 1852
1861 1870 1906 1915 1924 1933 1942 1951 1960 2059 2068 2077 2086 2095 2149 2158 2167
2176 2185 2194 2239 2248 2257 2266 2275 2284 2293 2329 2338 2347 2356 2365 2374 2383
2392 2419 2428 2437 2446 2455 2464 2473 2482 2491 2509 2518 2527 2536 2545 2554 2563
2572 2581 2590 2608 2617 2626 2635 2644 2653 2662 2671 2680 2707 2716 2725 2734 ...
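As a quick sanity check of the output above (this is not the algorithm from the answer, just a brute-force verification over a small range), one can recompute the group sums directly:

def sum_of_groups(n, grouping=3):
    # Sum the base-10**grouping "digits" of n, e.g. 235999 -> 235 + 999 = 1234.
    base = 10**grouping
    total = 0
    while n:
        total += n % base
        n //= base
    return total

# Smallest number whose groups of three sum to 1234 (small search range only).
print(next(n for n in range(10**7) if sum_of_groups(n) == 1234))   # prints 235999

This agrees with the first value in the abbreviated output, 235,999.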

dummy_cols Error: vector memory exhausted (limit reached?)

I am attempting to create dummy variables based on a factor variable with more than 200 factor levels. The data has more than 15 million observations. With the "fastDummies" package, I am using the dummy_cols function to convert the factor variable to dummies and remove the first level.
I have read numerous posts on the issue. Several suggest subsetting the data, which I cannot do. This analysis is for a school assignment that requires that I use all included data.
I am using a MacBook Pro with 16 GB of RAM and the 64-bit version of RStudio. The instructions in the post below describe how to increase the maximum RAM available to R. However, they seem to imply that I am already at maximum capacity, or that it may be unsafe for my machine to raise the memory restriction on R.
R on MacOS Error: vector memory exhausted (limit reached?)
I'm not sure how to go about posting 15 million rows of data. The following code shows the unique factor levels for the variable in question:
unique(housing$city)
[1] 40 80 160 200 220 240 280 320 440 450 460 520 560 600 640
[16] 680 720 760 840 860 870 880 920 960 1000 1040 1080 1120 1121 1122
[31] 1150 1160 1240 1280 1320 1360 1400 1440 1520 1560 1600 1602 1620 1640 1680
[46] 1720 1740 1760 1840 1880 1920 1950 1960 2000 2020 2040 2080 2120 2160 2240
[61] 2290 2310 2320 2360 2400 2560 2580 2640 2670 2680 2700 2760 2840 2900 2920
[76] 3000 3060 3080 3120 3160 3180 3200 3240 3280 3290 3360 3480 3520 3560 3590
[91] 3600 3620 3660 3680 3710 3720 3760 3800 3810 3840 3880 3920 3980 4000 4040
[106] 4120 4280 4320 4360 4400 4420 4480 4520 4600 4680 4720 4800 4880 4890 4900
[121] 4920 5000 5080 5120 5160 5170 5200 5240 5280 5360 5400 5560 5600 5601 5602
[136] 5603 5605 5790 5800 5880 5910 5920 5960 6080 6120 6160 6200 6280 6440 6520
[151] 6560 6600 6640 6680 6690 6720 6740 6760 6780 6800 6840 6880 6920 6960 6980
[166] 7040 7080 7120 7160 7240 7320 7360 7362 7400 7470 7480 7500 7510 7520 7560
[181] 7600 7610 7620 7680 7800 7840 7880 7920 8000 8040 8050 8120 8160 8200 8280
[196] 8320 8400 8480 8520 8560 8600 8640 8680 8730 8760 8780 8800 8840 8880 8920
[211] 8940 8960 9040 9080 9140 9160 9200 9240 9260 9280 9320
I used the following commands to create the dummy variables, based on the fastDummies package:
library(fastDummies)
housing <- dummy_cols(housing, select_columns = "city", remove_first_dummy = TRUE)
I get the following response:
Error: vector memory exhausted (limit reached?)
I am, again, trying to create 220 dummies based on the 221 levels (excluding the first to avoid issues of perfect collinearity in analyses).
Any help is most welcome. If I am missing something about the preceding suggestions, my apologies; none of them involved the exact issue I am experiencing (in the context of creating dummies) and I am not very proficient in use of the command line in Mac OS.
An update: I used the method from R on MacOS Error: vector memory exhausted (limit reached?), using Terminal to remove the default memory usage cap allotted to R, and was able to perform the operations I needed (although they took an extremely long time).
However, I am still concerned that this may be problematic for my machine. Could removing these default limits on memory used by R damage my computer? Activity Monitor says that my rsession is using almost 48 GB of RAM when I only have 16 GB of physical memory on board.
I understand that I may be walking the line between a coding and a software question here, but the two in this case are related.
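A rough back-of-the-envelope calculation shows why the limit is hit (assuming the dummies are stored as R integer vectors, which take 4 bytes per element; numeric vectors take 8):

rows = 15_000_000    # observations
dummies = 220        # new 0/1 columns (221 levels minus the dropped first level)
bytes_per_int = 4    # R integer vector: 4 bytes per element (double: 8)

gib = rows * dummies * bytes_per_int / 1024**3
print(f"about {gib:.1f} GiB for the dummy columns alone")   # roughly 12 GiB

R also tends to make intermediate copies of the data frame during an operation like this, so peak usage several times that figure, well beyond 16 GB of physical RAM, is plausible; the excess gets paged to disk, which would also explain the extremely long run time.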

Port forwarding: docker -> vagrant -> host

I have a Docker container with an FTP service running inside a Vagrant machine, and the Vagrant machine runs on a macOS host. The FTP service is reachable from the Vagrant machine via ftp localhost, but how can I expose it to the Mac host? The Mac -> Vagrant network is NAT, so I set up a 21:21 port forward between the Mac host and Vagrant, but when I run ftp localhost on the host it doesn't work. What am I doing wrong?
This is part of the output of ps aux in the Vagrant machine:
root 7841 0.0 0.5 113612 8948 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1108 -container-ip 172.17.0.1 -container-port 1108
root 7849 0.0 0.6 121808 10176 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1107 -container-ip 172.17.0.1 -container-port 1107
root 7857 0.0 0.7 154592 11212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1106 -container-ip 172.17.0.1 -container-port 1106
root 7869 0.0 0.7 154592 12212 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1105 -container-ip 172.17.0.1 -container-port 1105
root 7881 0.0 0.6 113612 10172 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1104 -container-ip 172.17.0.1 -container-port 1104
root 7888 0.0 0.7 162788 11192 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1103 -container-ip 172.17.0.1 -container-port 1103
root 7901 0.0 0.6 121808 10156 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1102 -container-ip 172.17.0.1 -container-port 1102
root 7909 0.0 0.6 154592 9216 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1101 -container-ip 172.17.0.1 -container-port 1101
root 7921 0.0 0.5 121808 9196 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 1100 -container-ip 172.17.0.1 -container-port 1100
root 7929 0.0 0.7 162788 12244 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 21 -container-ip 172.17.0.1 -container-port 21
root 7942 0.0 0.5 121808 8936 ? Sl 12:35 0:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 20 -container-ip 172.17.0.1 -container-port 20
message+ 7961 0.0 0.3 111224 5248 ? Ss 12:35 0:00 proftpd: (accepting connections)
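A quick way to test the forwarding from the Mac host is a short ftplib script (the address, port, and anonymous credentials below are assumptions for illustration):

from ftplib import FTP

ftp = FTP()
ftp.connect("127.0.0.1", 21, timeout=10)     # assumes port 21 on the Mac is forwarded to the Vagrant box
ftp.login("anonymous", "guest@example.com")  # assumes the server allows anonymous login
ftp.set_pasv(True)
print(ftp.nlst())   # a directory listing exercises the separate FTP data connection
ftp.quit()

If the connect and login succeed but the listing hangs, the control port (21) is forwarded but the passive-mode data ports are not; in that case the data ports the server uses (1100-1108 in the docker-proxy output above) would likely need the same Mac -> Vagrant forwards as port 21.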

High RAM usage by NGINX

There are 6 NGINX processes on the server. Ever since NGINX was started, their RES/VIRT values have kept growing until the server runs out of memory. Does this indicate a memory leak?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1941 root 20 0 621m 17m 4144 S 290.4 0.1 8415:03 mongod
16383 nobody 20 0 1675m 1.6g 724 S 21.0 5.2 13:19.30 nginx
16382 nobody 20 0 1671m 1.6g 724 S 17.2 5.1 13:21.39 nginx
16381 nobody 20 0 1674m 1.6g 724 S 15.3 5.1 13:28.45 nginx
16380 nobody 20 0 1683m 1.6g 724 S 13.4 5.2 13:24.77 nginx
16384 nobody 20 0 1674m 1.6g 724 S 13.4 5.1 13:19.83 nginx
16385 nobody 20 0 1685m 1.6g 724 S 13.4 5.2 13:25.00 nginx
Take a look at the ngx_http_limit_conn_module nginx module.
Also take a look at the client_max_body_size directive.

IIS 6.0 requests slowdown

I have a web application on IIS 6.0. It constantly processes a huge number of short requests (15-30 ms processing time). When a few (1-10) long-running requests arrive, all the short requests slow down (to 2000-6000 ms processing time, and more than 100000 ms for some of them).
Shouldn't there be some isolation between requests in IIS? Isn't one request supposed not to interrupt another?
In the IIS logs it looks like this:
[Normal work]
cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
192.168.1.7 200 0 0 2394 524 734
192.168.1.7 200 0 0 2394 524 0
192.168.1.7 200 0 0 2394 524 0
192.168.1.7 200 0 0 2394 524 15
192.168.1.7 200 0 0 2394 524 15
192.168.1.7 200 0 0 2394 524 0
192.168.1.7 200 0 0 2394 524 0
192.168.1.7 200 0 0 2394 524 15
192.168.1.7 200 0 0 2394 524 46
[Slowdown]
cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
192.168.1.7 200 0 64 0 522 508251
192.168.1.7 200 0 64 0 522 91827
192.168.1.7 200 0 64 0 522 386438
192.168.1.7 200 0 64 0 522 445947
192.168.1.7 200 0 0 178 522 35545
192.168.1.7 200 0 64 0 522 274130
sc-win32-status 64 means "The specified network is no longer available", but there were no disconnections.
I tried to tune IIS with tools like IISTuner (http://iistuner.codeplex.com/), but it had no effect.
Why does this happen?
How can I troubleshoot it?
It looks like all the trouble was in the application itself.
We used an ASP.NET Web Forms page with a DataGridView on it (pages were built server-side). When a long-running request came in, the server had to process it and load the data into its memory, which simply blocked other activity.
We rewrote the application using ASP.NET MVC (client-side paging) and the trouble was gone.
