Bind query resolution time in munin

Is it possible to graph the query resolution time of bind9 in munin?
I know there is a way to graph it for an Unbound server; has it already been done for bind? If not, how do I start writing a munin plugin for that? I'm getting stats from http://127.0.0.1:8053/ on the bind9 server.

I don't believe that "query time" is a function of BIND. About the only time that I see that value (with individual lookups) is when using dig. If you're willing to use that, the following might be a good starting point:
#!/bin/sh
# Munin plugin sketch: graph the query time that dig reports for one hostname.
case $1 in
config)
    cat <<'EOM'
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
EOM
    exit 0;;
esac
echo -n "time.value "
# Pull the number out of dig's ";; Query time: NN msec" line.
dig www.redhat.com | grep Query | cut -d':' -f2 | cut -d\  -f2
Note that there are two spaces after the "-d\" in the second cut statement. If you save the above as "querytime" and run it at the command line, the output should look something like:
root@pi1:~# ./querytime
time.value 189
root@pi1:~# ./querytime config
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
I'm not sure of the value in tracking this, though. The response time can be affected by whether the query is an initial lookup or the answer is already cached locally, by server load, by intervening network congestion, and so on.
Note: the above may be a bit buggy, as I wrote it on the fly, but it should give you a good starting point. That it returned the output above is a good sign.
In any case, I recommend reading the following before you write your own: http://munin-monitoring.org/wiki/HowToWritePlugins
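If the test output looks right, wiring the script into munin is usually just a matter of making it executable and symlinking it into the plugins directory; the paths below are typical Debian-style defaults (an assumption on my part, adjust for your distro):
# Sketch: install the plugin and exercise it through munin's own runner.
chmod +x querytime
ln -s "$PWD/querytime" /etc/munin/plugins/querytime
munin-run querytime config
munin-run querytime
(Then restart munin-node so it picks the plugin up.)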

Related

why "netstat -a" do not exit immediately but "netstat -n" does?

I have checked the documented function of "-n":
"Displays active TCP connections, however, addresses and port numbers are expressed numerically and no attempt is made to determine names."
But I can't see why "-n" would make netstat exit immediately.
From a quick check, I don't see the same description for the "-n" option that you do, and it doesn't make netstat run continuously.
As you didn't specify the version and exact command you are using, I tried both the version that comes with RH 7.6 (net-tools 2.10-alpha) and the latest from source (net-tools 3.14-alpha). The net-tools source code can be found on GitHub [1].
As I couldn't find the exact option you describe, I tried all the flags (without combinations) that don't require an argument. As far as I can tell, the only options that cause netstat not to exit immediately are '-g' and '-c'. '-c' makes sense, as it is the flag for running netstat continuously. For '-g' it is less obvious: the continuous-looking behavior comes from reading the /proc/net/igmp and /proc/net/igmp6 files line by line. The first file is read quickly, but the igmp6 file takes much longer (roughly 1 line per second). So '-g' isn't really continuous; it just takes a long time to finish.
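If you want to repeat that survey yourself, something along these lines works (a sketch; the flag list and the 10-second cutoff are arbitrary choices of mine):
# Time each flag, killing anything that runs longer than 10 seconds.
for flag in -a -n -r -i -s -g; do
    echo "== netstat $flag =="
    time timeout 10 netstat "$flag" > /dev/null
done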
From the code, the only mechanism for continuous execution is this pattern (it appears 4 times in the code):
if (i || !flag_cnt)
    break;
wait_continous();
'i' is a return code from a function, and the 'break' exits an infinite for loop, so basically the code runs continuously only if flag_cnt is set (which happens only when '-c' is provided) and there were no errors in the previous commands.
For the specific issue above, there could be a few explanations:
The option involves reading from a file that takes a very long time to finish, so it is not really continuous.
There's a correlation between the given option and flag_cnt which causes flag_cnt to be set.
There's a call to wait_continous() that doesn't follow the condition above.
As I said, I couldn't reproduce the issue in the original question, nor could I find any flag with the description above. Also, none of the flags besides '-c' caused netstat to run continuously.
If you still want to figure this out, I suggest you take a look at your code, or at least specify the net-tools version you use. The kernel version is also important, as some code may be compiled out due to missing kernel support.
[1] https://github.com/ecki/net-tools

Making a TIME SERIES graph THAT SCROLLS WITH THE DATA in R

I need to make exactly the graph shown on this page (the second one):
http://www.animatedgraphs.co.uk/line.html
Here is my current code:
timemax<-151737 # number of frames (and observations - so no interpolation needed)
setwd("C:/Users/victo/Downloads/ffmpeg/ffmpeg/bin/")
vis<-100 # how many time points are on the screen at one time
gdata<-data.frame('Temps'= data$time,'RH_Xacc'= data$RH_Xacc)
gname<-paste("g",1:timemax,".tif", sep="") # holds the names of the picture files
right<-(((1:timemax)<=vis)*100)+(((1:timemax)>vis)*1:timemax) # rightmost time on screen
left<- right-vis+2 # leftmost time on screen
leftlab<-200*ceiling((left-1)/200) # leftmost x label
rightlab<-200*floor(right/200) # rightmost x label
# draw graphs
for (i in 1:timemax) {
tiff(gname[i],width=480)
plot(gdata$Temps[right[i]:left[i]],gdata$RH_Xacc[right[i]:left[i]],col="red",type="l",ylim=c(-100,200),xlim=c(right[i],left[i]),xaxt="n",ylab="",xlab="time")
axis(1,at=seq(from=rightlab[i],to=leftlab[i],by=12))
lines(gdata$Temps[right[i]:i],gdata$RH_Xacc[right[i]:i])
dev.off(dev.cur())
}
# call FFMPEG and make the video
shell("C:/Users/victo/Downloads/ffmpeg/ffmpeg/bin/ffmpeg.exe -codecs -i g%d.tif -b:v 2048k gdata.mpg",mustWork=FALSE)
My code seems to work until the shell function call. I don't get an error; the code just never stops running, and I never get the video with all the data. Can someone tell me where the problem is, and how I can get the video, or whether there is another way to get the same result? I've tried the gganimate library but didn't succeed either. I am doing this in RStudio, using Sweave.
Thanks a lot
Actually, the first thing you should do is probably run the command directly from a terminal, not by calling shell from RStudio; you would get more verbose output:
C:/Users/victo/Downloads/ffmpeg/ffmpeg/bin/ffmpeg.exe -codecs -i g%d.tif -b:v 2048k gdata.mpg
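One thing worth checking (my assumption, not something from your post): ffmpeg's -codecs option normally just prints the list of supported codecs, so it may stop the encode from ever starting. A minimal sketch without it (the -framerate value is a guess, adjust to taste):
cd C:/Users/victo/Downloads/ffmpeg/ffmpeg/bin/
ffmpeg.exe -framerate 25 -i g%d.tif -b:v 2048k gdata.mpg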
Original post
Do you have any CPU performance profiling tool? I suspect that the shell command might take a very long time to run on your computer, since your example uses timemax <- 151737 whereas the page's example uses a much smaller timemax <- 1000.
Try your program with a low value of timemax (e.g. 1000) and time the execution. You could then extrapolate the total time needed by multiplying by roughly 150 (I am not an expert on ffmpeg; it might actually take longer).
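For the timing itself, from a unix-like shell something as simple as this would do (a sketch; makeframes.R is a hypothetical name for your script saved with timemax <- 1000):
time Rscript makeframes.R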

How to get arc diff on the command line to assume yes on a very large number of changes

My Jenkins server runs arc diff, and once in a while I have large diffs. I don't want my job to fail when that happens.
With the latest master of arc, I get:
This diff has a very large number of changes (762). Differential works
best for changes which will receive detailed human review, and not as
well for large automated changes or bulk checkins. See
https://secure.phabricator.com/book/phabricator/article/differential_large_changes/
for information about reviewing big checkins. Continue anyway? [y/N]
Usage Exception: Aborted generation of gigantic diff.
Build step 'Execute shell' marked build as failure
My current code tries to avoid interactivity and mostly works, except for large diffs. Is there any way around this?
echo "jenkins
Summary:
Test Plan:
required
Reviewers:
alberto56
Subscribers:
JIRA Issues:
$JIRAISSUE" > arc_info.txt
arc diff --allow-untracked --message jenkins --message-file arc_info.txt origin/master
rm arc_info.txt
There is no non-interactive option (yet) for arc diff. You may want to try something like:
echo 'y' | arc diff ...
or, to answer several prompts, one per line:
printf 'y\ny\ny\n' | arc diff ...
You could also use the yes command: http://linux.die.net/man/1/yes
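For example (an untested sketch based on your command above; yes repeats "y" indefinitely, so every prompt gets an answer):
yes | arc diff --allow-untracked --message jenkins --message-file arc_info.txt origin/master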

Pipe file to stdin and keep alive, waiting for new data

A semi-newbie to UNIX piping, so apologies if I'm asking anything obvious here. I'm using a program called CCExtractor to grab the closed captions from a video file. It has the option to receive a file from stdin, and it works great if I do the following:
./ccextractor -stdin < myvideofile.wtv
However, I want to try using it "live" - as a video is recording, it'll transcribe the subtitles. From my understanding, < won't do that, as it'll stop as soon as it reaches the current end of the file. Following this answer on Stack Overflow, it seems like:
tail -c +1 -f myvideofile.wtv | ./ccextractor -stdin
should work - but it doesn't process any part of the video at all (it should, at least, work as well as the previous command and parse the existing data). I figured I'd take a step back and use a simple cat:
cat myvideofile.wtv | ./ccextractor -stdin
and that doesn't work either. I was of the belief that the first and third commands ought to be roughly equivalent, but that's obviously not the case. What are the differences, and how could I get this to work?
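A couple of sanity checks may help narrow this down (a sketch; head and wc just stand in for ccextractor here):
# Does tail actually emit the file's bytes?
tail -c +1 -f myvideofile.wtv | head -c 64 | xxd
# Do the pipe and the redirect deliver the same byte count?
cat myvideofile.wtv | wc -c
wc -c < myvideofile.wtv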

Unix SQLLDR script gives 'Unexpected End of File' error

All, I am running the following script to load data onto the Oracle server using a unix box and sqlldr. Earlier it gave me an error saying sqlldr: command not found. I added the "sqlplus << EOF" part, and it still gives me an unexpected end of file syntax error on line 12, but the script is only 11 lines long. What do you think the problem is?
#!/bin/bash
FILES=$(ls *.txt)
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'#SERVERNAME << EOF sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f EOF
done
sqlplus will never know what to do with the command sqlldr; they are two complementary command-line utilities for interfacing with an Oracle DB.
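Incidentally, that "unexpected end of file" error is almost certainly the here-document syntax: the terminating EOF must sit alone at the start of its own line, which it doesn't in your one-liner. A sketch of the correct shape (though, as noted, putting sqlldr inside it still wouldn't work):
sqlplus ID/'PASSWORD'#SERVERNAME << EOF
-- sqlplus commands would go here; sqlldr is not one of them
EOF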
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
# you don't want this: FILES=$(ls *.txt)
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_NM=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME
for f in *.txt
do
    # don't need cat! cat $CTL | sed "s/:FILE/$f/g" > "$f".ctl
    sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
    # myBad: sqlldr "$SCHEMA_NM/$SCHEMA_PSWD" control="$CTL_PATH/$f.ctl" data="$f"
    sqlldr "$SCHEMA_NM/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files that have odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is quoting all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
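To illustrate with a toy example of mine: a file named "my data.txt" gets split into two words by the $FILES approach, but survives the glob loop as long as the reference is quoted:
for f in *.txt; do
    echo "processing: $f"   # quoted, so "my data.txt" stays one argument
done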
In the edit update, I've turned your CTL_PATH and CTL_FILE into variables. I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed; your use of redirection (> "$f".ctl) to create the altered file is very shell-like, though.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up:
A. Are you sure the Oracle client package is installed on the machine that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working $PATH?
C. Try which sqlldr. If you don't get anything, either it's not installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr cmd. find / -name 'sqlldr*' will take a long time to run, but it will print out the path you want to use.
F. Take the "path" part of what is returned (like /opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and edit the script at the 2nd line with:
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH
