I am trying to change the TCP congestion control algorithm on my CentOS 7 machine.
I checked which algorithm is currently in use with:
cat /proc/sys/net/ipv4/tcp_congestion_control
cubic
I want to change it to htcp, but when I check whether the module is available:
ls /lib/modules/`uname -r`/kernel/net/ipv4/
ah4.ko inet_diag.ko ipip.ko netfilter udp_diag.ko xfrm4_mode_tunnel.ko
esp4.ko ipcomp.ko ip_tunnel.ko tcp_diag.ko xfrm4_mode_beet.ko xfrm4_tunnel.ko
gre.ko ip_gre.ko ip_vti.ko tunnel4.ko xfrm4_mode_transport.ko
So, first of all, I didn't see CUBIC there, nor HTCP. How do I enable the HTCP congestion control?
Maybe a little bit too late, but you can change the congestion control from cubic to htcp with:
# sysctl -w net.ipv4.tcp_congestion_control=htcp
You may also check which congestion control algorithms are allowed on your system with:
# sysctl net.ipv4.tcp_allowed_congestion_control
If you want to see which ones are available:
# sysctl net.ipv4.tcp_available_congestion_control
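If htcp does not show up as available, the tcp_htcp kernel module is probably just not loaded. A minimal sketch of loading it and making the setting persistent, assuming your kernel actually ships the module (the drop-in file name 90-congestion.conf is just an example):
## Load the H-TCP module (setting the sysctl usually auto-loads it too)
modprobe tcp_htcp
## Confirm htcp is now listed
sysctl net.ipv4.tcp_available_congestion_control
## Switch the running system to htcp
sysctl -w net.ipv4.tcp_congestion_control=htcp
## Persist the choice across reboots via a sysctl drop-in file
echo "net.ipv4.tcp_congestion_control = htcp" > /etc/sysctl.d/90-congestion.conf
sysctl -p /etc/sysctl.d/90-congestion.conf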
I have been told that one of my servers intermittently throws ZeroWindow errors, and I would like to monitor this in Prometheus.
If I run netstat -s, some of the results are:
netstat -s
Ip:
...
IcmpMsg:
...
Tcp:
...
TcpExt:
TCPFromZeroWindowAdv: 96
TCPToZeroWindowAdv: 96
TCPWantZeroWindowAdv: 16
It is very difficult to find definitions for these; the closest I have found is:
WantZeroWindowAdv: +1 each time window size of a sock is 0
ToZeroWindowAdv: +1 each time window size of a sock dropped to 0
FromZeroWindowAdv: +1 each time window size of a sock increased from 0
Reading this, I believe that WantZeroWindowAdv shows the ZeroWindow problems. (It counts each time a socket is asked for its window size and responds with 0.)
Not part of the question, but I would then need to add this to nodes_netstat.go for Prometheus.
Am I correct? Is this approach valid? netstat is not well documented.
Your descriptions of "To" and "From" are correct.
"Want" is when TCP would have liked to have sent a zero window back to a sender, but couldn't because that would have implied a shrinking of the window rather than it being full.
I'm trying to profile an existing application with a fairly complicated structure. For now I am using perf_event_open and the required ioctl calls to enable the events I am interested in.
The manpage states that PERF_COUNT_HW_INSTRUCTIONS should be used carefully, so which event should be preferred in the case of a Skylake processor? Maybe a specific Intel PMU event?
The perf_event_open manpage (http://man7.org/linux/man-pages/man2/perf_event_open.2.html) says about PERF_COUNT_HW_INSTRUCTIONS:
PERF_COUNT_HW_INSTRUCTIONS Retired instructions. Be careful, these can be affected by various issues, most notably hardware interrupt counts.
I think this means that COUNT_HW_INSTRUCTIONS can be used (and it is supported almost everywhere), but that the exact value of COUNT_HW_INSTRUCTIONS for a given code fragment may differ slightly between runs because of noise from interrupts or other logic.
So it is safe to use the PERF_COUNT_HW_INSTRUCTIONS and PERF_COUNT_HW_CPU_CYCLES events on most CPUs. The perf_events subsystem in the Linux kernel will map COUNT_HW_CPU_CYCLES to whatever raw event is most suitable for the CPU currently in use and its PMU.
Depending on your goals, you should try to gather some statistics on the PERF_COUNT_HW_INSTRUCTIONS values for your code fragment. You can also check the stability of this counter by running perf stat several times on some simple program:
perf stat -e cycles:u,instructions:u /bin/echo 123
perf stat -e cycles:u,instructions:u /bin/echo 123
perf stat -e cycles:u,instructions:u /bin/echo 123
Or use the integrated repeat function of perf stat:
perf stat --repeat 10 -e cycles:u,instructions:u /bin/echo 123
I see a variation of about ±10 instruction events (less than 0.1%) over roughly 200 thousand total instructions executed, so it is very stable. For cycles I see about 5% variation, so it is really the cycles event that deserves the "be careful" warning.
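If you want to reduce the run-to-run noise further, a small sketch building on the commands above: pin the workload to one core with taskset and keep the :u modifier so only user-space counts are compared; --repeat then reports the mean and the relative spread:
## Pin to CPU 0 and count user-space cycles/instructions over 10 runs;
## perf stat prints the mean and the +- deviation for each event
taskset -c 0 perf stat --repeat 10 -e cycles:u,instructions:u /bin/echo 123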
I want to visualize memory mapping states of processes. For this I parsed the output of
# strace -s 256 -v -k -f -e trace=memory,process command
and now I have a time series of disjoint sums of intervals on the real line. Is there a convenient visualization library for such data? A Haskell interface would be the most time-saving for me, but any suggestion is welcome. Thanks!
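As a side note, strace can also write one trace file per process (-ff together with -o), which avoids having to demultiplex the interleaved -f output when parsing; a sketch, with command standing in for the traced program:
## One output file per process (trace.PID), keeping stack traces (-k)
## and the same memory/process syscall filter as above
strace -s 256 -v -k -ff -o trace -e trace=memory,process command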
Just in case this might be useful for anyone, I hacked up a little tool to do this. (By the way I ended up using R/Shiny for interactive visualization.)
Here's the github repo.
It's interactive in that if you click a region, the stack traces responsible for the memory mapping will be shown, like this:
trace:
22695 mmap(NULL, 251658240, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x2b4210000000
/lib/x86_64-linux-gnu/libc-2.19.so(mmap64+0xa) [0xf487a]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN2os17pd_reserve_memoryEmPcm+0x31) [0x91e9c1]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN2os14reserve_memoryEmPcm+0x20) [0x91ced0]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN13ReservedSpace10initializeEmmbPcmb+0x256) [0xac20a6]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN17ReservedCodeSpaceC1Emmb+0x2c) [0xac270c]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN8CodeHeap7reserveEmmm+0xa5) [0x61a3c5]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN9CodeCache10initializeEv+0x80) [0x47ff50]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_Z12init_globalsv+0x45) [0x63c905]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(_ZN7Threads9create_vmEP14JavaVMInitArgsPb+0x23e) [0xa719be]
/usr/lib/jvm/java-8-oracle/jre/lib/amd64/server/libjvm.so(JNI_CreateJavaVM+0x74) [0x6d11c4]
/usr/lib/jvm/java-8-oracle/lib/amd64/jli/libjli.so(JavaMain+0x9e) [0x745e]
/lib/x86_64-linux-gnu/libpthread-2.19.so(start_thread+0xc4) [0x8184]
/lib/x86_64-linux-gnu/libc-2.19.so(clone+0x6d) [0xfa37d]
The same colors correspond to the same flags for mmap/msync/madvise etc.
Synopsis
$ make show-prerequisites
# (Follow the instructions)
$ make COMMAND="time ls"
...
DATA_DIR=build/data-2016-12-12_02h38m13s
Listening on http://127.0.0.1:5000
....
$ firefox http://127.0.0.1:5000
$ # Re-browse the previous results
$ make DATA_DIR=build/data-2016-12-12_02h38m13s
In the process of development I was struck by how geometric the problem is. So I created a module called Sheaf and described there a recipe for defining a Grothendieck topology and a constant sheaf on it. It now seems that Grothendieck (or Lawvere-Tierney) topologies are actually ubiquitous in programming, but I'm not sure whether this will lead to anything worthwhile. So feel free to check it out!
I am running a job on a Sun Grid Engine (now known as Oracle Grid Engine) cluster. To see whether my job is slowing down because the node is overloaded, I tried to check the status of the node:
$ qstat -l hostname=hnode03 -f
queuename qtype resv/used/tot. load_avg arch states
---------------------------------------------------------------------------------
all.q#hnode03.rnd.mycorp.com BP 0/0/0 103.41 lx24-amd64
---------------------------------------------------------------------------------
highmem.q#hnode03.rnd.mycorp BP 0/37/40 103.41 lx24-amd64
977530 0.76963 runJob1 userme r 09/13/2013 17:53:26 2
---------------------------------------------------------------------------------
threaded.q#hnode03.rnd.mycor BP 0/24/32 103.41 lx24-amd64
---------------------------------------------------------------------------------
workflow.q#hnode03.rnd.mycor B 0/0/0 103.41 lx24-amd64
and
$ qhost -h hnode03
HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS
-------------------------------------------------------------------------------
global - - - - - - -
hnode03 lx24-amd64 64 103.4 504.8G 122.9G 16.0G 58.0M
Now, the load_avg is 103.41, while NCPU is only 64. Is this ever supposed to happen? Are some jobs using more CPU than the slots they were assigned?
Update: In response to queries, the configurations are uploaded to http://pastebin.com/hLnJBetS.
Yes, it can.
Slots are not synonymous with cores (NCPU). Slots should be seen as "how many jobs can be scheduled in parallel on a node."
If you only want one job to run at a time, set the slot count for your machines to one.
As for the load factor: even if your job only uses one slot, if it spawns many threads or subprocesses, then all the cores will be used and the load average will definitely go above 1.
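To see how the configured slots relate to the 64 cores on that node, you can inspect the queue and host configuration directly; a sketch using standard SGE commands with the queue and host names from the question:
## Slots configured per queue (slots is set per host inside the queue definition)
qconf -sq all.q | grep slots
qconf -sq highmem.q | grep slots
qconf -sq threaded.q | grep slots
## Execution host definition (complex values, load thresholds, etc.)
qconf -se hnode03
## Current load and slot usage as the scheduler sees it
qhost -h hnode03 -q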
Straight to the point: I'd like to create a script in UNIX to open two windows at a specific location on the screen, enter a username and password (which would be given by the user as arguments), and then execute another script.
I'd like to know if this is possible and if so; where should I look?
I'm new to UNIX, but am quite familiar with scripting and programming.
EDIT after thb and notfed responded
I am currently running SunOS 5.6 on OS X
Regarding the location of the windows, review X(7) -- that is, type the command man 7 X at the terminal and review the result -- and scroll down that man page to the section GEOMETRY SPECIFICATIONS, if your version of the man page has such a section. In brief, to have the program xfoo open its window with a size of 200 horizontally and 160 vertically, with its upper left corner at coordinates (40, 100), give the command xfoo -geometry 200x160+40+100.
This may not be a complete answer to your question, since you prudently have not given full details, but one suspects that it will set you on the right track, so to speak.
Regarding your authentication question, the crypt(3) manpage and its SEE ALSO section might help. For more advanced handling, see Libpam, where PAM stands for Pluggable Authentication Modules.
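On the "enter a username and password" part: if you would rather have the user type the credentials than pass them on the command line, the shell can collect them without echoing the password before the windows are opened. This is only a sketch for gathering the values (verifying them is where crypt(3) or PAM comes in); the variable names are placeholders:
## Prompt for a username and a password; turn off terminal echo for the password
printf 'Username: '
read USERNAME
printf 'Password: '
stty -echo
read PASSWORD
stty echo
printf '\n'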
If the two windows are xterms, then there's an easy way to do this.
Note: this is from a ksh script, but it should work in bash as well.
## This is how big (how many columns & rows) your Xterm will be
GEOM0=198x20
GEOM1=98x45
## Colors
COLOR="-bg black -fg white -cr red"
## Xterm Options (See Man page)
XOPTS="+ah +ai -b 2 -cb +cn -j -ls -rw -aw -si +sk"
## Scroll-back Buffer
SCRLB="9999"
## Commands to execute (you could put an SSH command here)
CMD0="-e /bin/gtail -F /var/adm/messages"
CMD1="-e /bin/gtail -F /var/log/secure"
TITLE="-title SomethingCleverHere"   ## No spaces here: ${TITLE} is expanded unquoted below
TERMBIN=/usr/openwin/bin/xterm
# PLACEMENT
# We specify where we want the window to pop-up by adding "+#+#" to the GEOM.
################################################################################
## Top-Left corner (For my monitor, that's "+2+2")
${TERMBIN} ${XOPTS} -sl ${SCRLB} -fn 6x10 ${COLOR} -geometry ${GEOM0}+2+2 ${TITLE} ${CMD0} &
## Top-Center (For my monitor, that's "+2+233")
${TERMBIN} ${XOPTS} -sl ${SCRLB} -fn 6x10 ${COLOR} -geometry ${GEOM1}+2+233 ${TITLE} ${CMD1} &