My logs are not ordered by date:
1. 2013-09-13T09:44:10.581-0400 - 4mainthreadtest#test.com - (v1.6.88) - REPLAY >> Scheduling replay in 2 seconds
2. 2013-09-13T09:44:10.546-0400 - 4mainthreadtest#test.com - (v1.6.88) - REPLAY >> Delay of 106803.116188 seconds
3. 2013-09-13T09:44:10.581-0400 - 4mainthreadtest#test.com - (v1.6.88) - REPLAY >> Hexoskin - replay completed
4. 2013-09-13T09:44:10.535-0400 - 4mainthreadtest#test.com - (v1.6.88) - Hexoskin SDK - Playback ended with 0x0000
How can I order them?
I looked at the command-line sort, but I was not able to sort by the ISO 8601 date.
You want to sort by the second field (2013-09-13...), not the whole line. You can specify that using the -k parameter:
sort -k 2 log.txt
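ISO 8601 timestamps compare correctly as plain strings, so no date-specific options are needed. If you want the comparison limited to the timestamp field alone (rather than everything from field 2 to the end of the line), bound the key:
sort -k2,2 log.txt                  # compare only the timestamp field
sort -k2,2 -o log.sorted log.txt    # same, writing the result to a file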
I have created a Unix script to be executed after the session finishes.
The script basically counts the lines of specific file and then creates a trailer with this specific structure:
T000014800000000000000000000000000000
T - for trailer
0000148 - number of lines
00000000000000000000000000000 - filler
I have tested the script on a Mac. I know the environments are totally different, but I want to know what needs to be changed in order to execute this script successfully in IPC.
After execution I get the following error message:
The shell command failed with exit code 126.
I invoke the script as follows:
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"
#! /bin/sh
TgtFiles=$1
TgtFilesBody=$TgtFiles/body.txt
TgtFilesTrailer=$TgtFiles/trailer.txt
# Count the lines in the body file
string1=$(sed -n '$=' $TgtFilesBody)
# Build an 8-character string of zeroes to pad the count with
pad=$(printf '%0.1s' "0"{1..8})
padlength=8
# 'T' marker followed by the zero-padded line count
string2='T'
string3=$(printf '%s%*.*s%s\n' "$string2" 0 $((padlength - ${#string1} - ${#string2} )) "$pad" "$string1")
# Append the filler and write out the trailer record
string4='00000000000000000000000000000'
string5=$(printf '%s%*.*s%s\n' "$string3" 0 $((${#string3} - ${#string4} )) "$string4")
echo $string5 > $TgtFilesTrailer
Any idea would be great.
Thanks in advance.
Please check the points below.
It looks like a permission issue. Log in as the Informatica user (the user that runs the Informatica daemon) and run this command manually; you should then be able to see the actual errors.
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"
Sometimes the server variable $PMRootDir does not get interpreted in UNIX and can resolve to a null value. After logging into UNIX as the user above, run echo $PMRootDir to check that it resolves correctly.
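For reference, exit code 126 from the shell generally means the command was found but could not be executed, which is usually a permissions problem. A quick way to check both points while logged in as that user (a sketch, using the paths from the question):
echo $PMRootDir                                          # should print a real path, not an empty line
ls -l $PMRootDir/scripts/exec_trailer_unix.sh            # check the read/execute bits for that user
chmod u+rx $PMRootDir/scripts/exec_trailer_unix.sh       # grant them if they are missing
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"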
You can create the trailer file easily within Informatica itself.
Just add an Aggregator transformation right before the actual target (group by a dummy field to calculate count(*)), then an Expression transformation to build those strings, and then a trailer-file target. Just three more transformations, as sketched below.
           |--> AGG --> EXP --> Trailer Target file
Final Tr --|
           |--> Final Target
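If it helps to validate what the Expression transformation should produce, here is a minimal sh sketch of the same trailer record, assuming the 7-digit count and 29-character filler from the example above and reusing the file names from the question:
lines=$(wc -l < "$TgtFilesBody")                         # count the body records
printf 'T%07d%029d\n' "$lines" 0 > "$TgtFilesTrailer"    # 'T' + zero-padded count + zero filler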
I am going to run a load test using JMeter on Amazon AWS, and I need to know, before starting my test, how much traffic it is going to generate over the network.
The criterion that Amazon has in their policy is:
sustains, in aggregate, for more than 1 minute, over 1 Gbps (1 billion bits per second) or 1 Gpps (1 billion packets per second). If my test is going to exceed this threshold, we need to submit a form before starting the test.
So how can I know whether the test will exceed this number or not?
Run your test with 1 virtual user and 1 iteration in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
To get an approximate figure, open the result.csv file using the Aggregate Report listener; there you will have two columns: Received KB/sec and Sent KB/sec. Multiply these by the duration of your test in seconds and you will get the number you're looking for.
Alternatively, you can open the result.csv file in MS Excel, LibreOffice Calc, or an equivalent spreadsheet, sum the bytes and sentBytes columns, and get the traffic with 1-byte precision.
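For example, a rough command-line sketch of that summation, assuming the default CSV column order in which bytes is field 10 and sentBytes is field 11 (check the header of your result.csv, since jmeter.properties can change the columns):
awk -F',' 'NR > 1 { rx += $10; tx += $11 } END { printf "received=%d bytes  sent=%d bytes\n", rx, tx }' result.csv
Scale those single-user totals by the number of virtual users and iterations you plan to run to estimate the full-load traffic.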
I have a requirement to run dummy jobs for 30 minutes and 60 minutes, respectively.
I have tried --delay 30 in command-line jobs, but I did not get the expected delay.
Designating a job as type ‘dummy’ will bypass anything contained within the command line field.
You have two options to create a 30/60-minute timer job.
Option a:
Make the job a command line type job and put sleep 1800 or sleep 3600 in the command line field.
Option b:
Make the job a dummy type job and put sleep 1800 or sleep 3600 in either the pre-execution or post-execution fields.
By default, the sleep command operates in seconds. For Windows you may want to look into using the PowerShell version, which would be powershell.exe -command start-sleep 1800
Use _sleep instead of sleep
Another way to add a waiting time, either before or after an OS-type job, is to use the pre-execution or post-execution command options, as appropriate.
Using _sleep is more convenient because it is operating-system independent and is provided by the Control-M/Agent, which means you do not require an extra deployment for that functionality.
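For example, a 30-minute timer job would simply carry the following in its pre-execution (or post-execution) command field, with the delay given in seconds:
_sleep 1800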
Our monitoring system dumps metrics into Graphite once per minute and has a retention of 1min:2d,5min:20d,30min:120d,6h:2y. However, I've recently added monitors that run on a 5-minute period, and I've found that:
The 1 minute points are four zeroes and an actual value, repeating of course.
The 5+ minute points are all zeroes, likely because my xFilesFactor is higher than 0.2 and the aggregation just doesn't happen at all.
What I'd like to do is simply create a new Whisper file with the new retentions (and no wasted space) and then import/re-aggregate the data into it. From what I've found, whisper-resize.py is supposed to be the right tool.
As a test I've been doing:
whisper-resize.py \
--newfile=/tmp/foo.wsp \
--aggregate --aggregationMethod=max \
--xFilesFactor=0.1 \
--force \
quotas/us-central1CPUS/CPUS.wsp \
5min:20d 30min:120d 6h:2y
But after this operation completes, foo.wsp is just filled with zeroes.
What's the deal?
You just need to change the xFilesFactor on the target file, for example:
whisper-resize.py --xFilesFactor=0.0 --nobackup quotas/us-central1CPUS/CPUS.wsp 1min:2d 5min:20d 30min:120d 6h:2y
You will not waste space; the Whisper format has a fixed file size anyway. Please see the details at http://obfuscurity.com/2012/04/Unhelpful-Graphite-Tip-9
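To confirm that the resized file actually kept its data, you can inspect it afterwards with the standard Whisper tools (assuming they are installed alongside whisper-resize.py):
whisper-info.py quotas/us-central1CPUS/CPUS.wsp                     # shows xFilesFactor, aggregation method and retentions
whisper-fetch.py --pretty quotas/us-central1CPUS/CPUS.wsp | tail    # spot-check the most recent datapoints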
Is it possible to graph the query resolution time of BIND 9 in Munin?
I know there is a way to graph it on an Unbound server; is it already done for BIND? If not, how do I start writing a Munin plugin for that? I'm getting stats from http://127.0.0.1:8053/ on the BIND 9 server.
I don't believe that "query time" is a function of BIND. About the only time that I see that value (with individual lookups) is when using dig. If you're willing to use that, the following might be a good starting point:
#!/bin/sh
# Munin plugin: graphs dig's reported query time for www.redhat.com

# "config" is what Munin calls to get the graph metadata
case $1 in
config)
cat <<'EOM'
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
EOM
exit 0;;
esac

# Normal run: dig prints a line like ";; Query time: 189 msec"; extract the number
echo -n "time.value "
dig www.redhat.com|grep Query|cut -d':' -f2|cut -d\  -f2
Note that there are two spaces after the "-d\" in the second cut statement. If you save the above as "querytime" and run it at the command line, the output should look something like:
root@pi1:~# ./querytime
time.value 189
root@pi1:~# ./querytime config
graph_title Red Hat Query Time
graph_vlabel time
time.label msec
I'm not sure of the value in tracking the above, though. The response time can be affected by whether the query is an initial lookup, whether the answer is cached locally, server load, intervening network congestion, and so on.
Note: the above may be a bit buggy as I've written it on the fly, but it should give you a good starting point. That it returned the above output is a good sign.
In any case, I recommend reading the following before you write your own: http://munin-monitoring.org/wiki/HowToWritePlugins
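If you want to try the plugin under Munin itself, the usual steps are roughly as follows (paths vary by distribution; munin-run is the test harness shipped with munin-node):
cp querytime /usr/share/munin/plugins/querytime
chmod +x /usr/share/munin/plugins/querytime
ln -s /usr/share/munin/plugins/querytime /etc/munin/plugins/querytime
munin-run querytime                 # should print "time.value <milliseconds>"
service munin-node restart          # or: systemctl restart munin-node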