Graphite and collectd: retentions not applied

I have installed collectd on BIND9 hosts and configured it to pass metrics to Graphite. I want to be able to view metrics for at least one month, but the retentions in storage-schemas.conf are not applied for the collectd section; instead the last, default section is applied, which stores only the last 24 hours of metrics.
I understand that I need to set a correct retention pattern to match the metrics, and it seems correct to me already, but it does not work :(
My storage-schemas.conf looks like this for now:
[carbon]
pattern = ^carbon\.
retentions = 60:90d
[mxservers]
pattern = ^mx-servers\.*
retentions = 60s:7d,5m:2y
[ns.servers]
pattern = ^ns\d\.collectd\..*
retentions = 60s:7d,10m:2y
[collectd]
pattern = ^collectd\.*
retentions = 60s:7d,10m:2y
# *** Netapp Monitoring ***
[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:100d, 1d:5y
[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:100d, 1d:5y
[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
# *** Netapp Monitoring ***
[default_1min_for_1day]
pattern = .*
retentions = 60s:1d
I have a problem with the section named [ns.servers];
I have tried setting its regex to:
^ns\d\.collectd\.
^ns\d\.collectd\..*
^ns.\.collectd\.
^ns.\.collectd\..*
None of them solves the problem.
The metrics are stored with names like this:
ns1.collectd.load.load.shortterm
ns2.collectd.load.load.longterm
ns1.collectd.interface-bond0.if_packets.rx
ns2.collectd.interface-bond0.if_packets.tx
Please help me set a correct regex that matches them.

Did you happen to configure the regexes in storage-schemas.conf after the .wsp files were created? If so, you have to resize the existing .wsp files manually. You can do this with the whisper-resize utility included in the whisper package (or just delete the files and let carbon recreate them).
storage-schemas.conf is only applied when creating a new .wsp file.
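For example, to apply the [ns.servers] retentions to every existing whisper file for those hosts, something like this would do it (a sketch; the whisper root follows the creates.log paths shown below, and depending on your package the script may be named whisper-resize or whisper-resize.py):
find /var/lib/graphite/whisper/ns1 /var/lib/graphite/whisper/ns2 -name '*.wsp' \
    -exec whisper-resize.py {} 60s:7d 10m:2y \;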

I finally realized what the problem was! I took a look at /var/log/carbon/creates.log and found that my collectd metrics are stored as ns2/collectd.* ("hostname" + "/" + "collectd"), because the collectd configuration on the DNS servers contained a Postfix "/collectd" option that causes this.
05/06/2017 09:54:22 :: creating database file /var/lib/graphite/whisper/ns2/collectd/cpu-1/cpu-idle.wsp (archive=[(60, 1440)] xff=None agg=None)
05/06/2017 09:54:22 :: new metric ns2/collectd.disk-sda.disk_octets.write matched schema default_1min_for_1day
05/06/2017 09:54:22 :: new metric ns2/collectd.disk-sda.disk_octets.write matched aggregation schema default
So the solution was to make the regex look like this: pattern = ^ns[1-2]\/collectd\.* (with the "/" character escaped).
Then I deleted the old metrics, and the new metric files were created with the correct retentions!
05/06/2017 11:32:12 :: creating database file /var/lib/graphite/whisper/ns2/collectd/df-boot-efi/df_complex-free.wsp (archive=[(60, 10080), (600, 105120)] xff=None agg=None)
05/06/2017 11:32:12 :: new metric ns2/collectd.interface-eth0.if_errors.tx matched schema ns.servers
05/06/2017 11:32:12 :: new metric ns2/collectd.interface-eth0.if_errors.tx matched aggregation schema default
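For reference, the full [ns.servers] section then becomes (assembled from the pattern and retentions above; note the trailing \.* only matches zero or more literal dots, so the anchored ^ns[1-2]\/collectd prefix is what actually does the matching):
[ns.servers]
pattern = ^ns[1-2]\/collectd\.*
retentions = 60s:7d,10m:2y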

How to exclude multiple file extensions from diff target in JGit

I found a way to exclude a specified file extension from a JGit diff in this way:
val excludePath = PathSuffixFilter.create(".designer.cs").negate()
val df = new DiffFormatter(DisabledOutputStream.INSTANCE)
df.setPathFilter(excludePath);
What should I do for multiple file extensions?
There is an OrTreeFilter and an AndTreeFilter to combine multiple TreeFilters.
To exclude multiple file endings, combine the single path filters with an AndTreeFilter and use this to configure the diff formatter:
val fooFilter = PathSuffixFilter.create(".foo").negate()
val barFilter = PathSuffixFilter.create(".bar").negate()
val treeFilter = AndTreeFilter.create(fooFilter, barFilter);
...
diffFormatter.setPathFilter(treeFilter);
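If you need to exclude more than two extensions, AndTreeFilter.create also has overloads taking an array or collection of filters, so you can build the list programmatically (a sketch in the same Scala-flavoured style as above; the extension list is made up):
val extensions = Array(".foo", ".bar", ".baz")
val filters: Array[TreeFilter] = extensions.map(ext => PathSuffixFilter.create(ext).negate())
diffFormatter.setPathFilter(AndTreeFilter.create(filters))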

Workaround for case-sensitive input to dir

I am using Octave 5.1.0 on Windows 10 (x64). I am parsing a series of directories looking for an Excel spreadsheet in each directory with "logbook" in its filename. The problem is these files are created by hand and the filenaming isn't consistent: sometimes it's "LogBook", other times it's "logbook", etc...
It looks like the string passed as input to the dir function is case-sensitive, so if I don't have the correct case, dir returns an empty struct. Currently, I am using the following workaround, but I wondered if there was a better way of doing this (for a start, I haven't captured all possible upper/lower case combinations):
logbook = dir('*LogBook.xls*');
if isempty(logbook)
  logbook = dir('*logbook.xls*');
  if isempty(logbook)
    logbook = dir('*Logbook.xls*');
    if isempty(logbook)
      logbook = dir('*logBook.xls*');
      if isempty(logbook)
        error(['Could not find logbook spreadsheet in ' dir_name '.'])
      end
    end
  end
end
You need to get the list of filenames (either via readdir, dir, ls), and then search for the string in that list. If you use readdir, it can be done like this:
[files, err, msg] = readdir ('.'); # read current directory
if (err != 0)
  error ("failed to readdir (error code %d): %s", err, msg);
endif
logbook_indices = find (cellfun (@any, regexpi (files, 'logbook')));
logbook_filenames = files(logbook_indices);
A much less standard approach could be:
glob ('*[lL][oO][gG][bB][oO][oO][kK]*')
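Tying it back to the original loop, the readdir approach collapses the nested ifs to something like this (a sketch, assuming dir_name holds the directory being scanned, as in the question's error message):
files = readdir (dir_name);                     # all entries in the directory
idx = find (cellfun (@any, regexpi (files, 'logbook.*\.xls')));
if isempty (idx)
  error (['Could not find logbook spreadsheet in ' dir_name '.'])
endif
logbook = files{idx(1)};                        # first case-insensitive match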

XQuery (saxon) failing with a schema (XPath works)

I switched from XPath to XQuery in Saxon, and on the selects where I have a schema I'm getting the error message:
A typed input document can only be used with a schema-aware query
My setup is:
InputSource xmlSource = new InputSource(xmlData);
SAXSource saxSource = new SAXSource(reader, xmlSource);
Source schemaSource = new StreamSource(schemaFile);
Configuration config = createEnterpriseConfiguration();
config.addSchemaSource(schemaSource);
Processor processor = new Processor(config);
SchemaValidator validator = new SchemaValidatorImpl(processor);
DocumentBuilder doc_builder = processor.newDocumentBuilder();
if(!preserveWhiteSpace)
doc_builder.setWhitespaceStrippingPolicy(WhitespaceStrippingPolicy.ALL);
doc_builder.setSchemaValidator(validator);
XdmNode root_node = doc_builder.build(saxSource);
XQueryCompiler compiler = processor.newXQueryCompiler();
Is there something additional I need to do on queries where there is a schema?
thanks - dave
Call XQueryCompiler.setSchemaAware(true);
This isn't the default because it's good for the optimizer to know whether the data is likely to be typed or untyped, and it's inefficient to generate schema-aware code if the data is untyped (conversely, when the data is typed, schema-aware code is typically faster -- though the savings can be eaten up by the extra cost of validating the input).
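In the setup shown in the question, that amounts to one extra line on the compiler (a sketch; queryText is a stand-in for wherever your XQuery source comes from):
XQueryCompiler compiler = processor.newXQueryCompiler();
compiler.setSchemaAware(true);                              // allow typed (validated) input
XQueryExecutable executable = compiler.compile(queryText);  // queryText is hypothetical here
XQueryEvaluator evaluator = executable.load();
evaluator.setContextItem(root_node);                        // the validated document built above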

How do I use regex to find an FS.File on an FS.Collection in Meteor

How do I use regex to find an FS.File on an FS.Collection in Meteor? My code is as follows, and it is not working:
partOfFileName = "*User_" + clickedResellerId + "_*";
var imgs = Images.find({fileName:{$regex:partOfFileName}});
//var imgs = Images.find();
return imgs // Where Images is an FS.Collection instance
In place of fileName I've also tried name, and that does not work either. Please help.
I don't think your regex is valid. Did you perhaps mean the following?
partOfFileName = ".*User_" + clickedResellerId + "_.*";
Please note that POSIX wildcard (glob) notation is different from regular expressions. In regular expressions, the * operator indicates repetition of the preceding element (in this case a ., i.e., anything). A * by itself has no meaning; it does not mean "anything" as it does in POSIX globs.
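One more thing to watch for: if clickedResellerId can ever contain regex metacharacters, escape it before building the pattern (a sketch; the escapeRegex helper is ad hoc, not part of Meteor or CollectionFS):
// hypothetical helper: escape regex metacharacters in the id
var escapeRegex = function (s) {
  return String(s).replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
};
partOfFileName = ".*User_" + escapeRegex(clickedResellerId) + "_.*";
var imgs = Images.find({fileName: {$regex: partOfFileName}});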

pyparsing: optional data missing from result set when parsing multiple lines

I am a fairly new pyparsing user and have a missing match I don't understand.
Here is the text I would like to parse:
polraw="""
set policy id 800 from "Untrust" to "Trust" "IP_10.124.10.6" "MIP(10.0.2.175)" "TCP_1002" permit
set policy id 800
set dst-address "MIP(10.0.2.188)"
set service "TCP_1002-1005"
set log session-init
exit
set policy id 724 from "Trust" to "Untrust" "IP_10.16.14.28" "IP_10.24.10.6" "TCP_1002" permit
set policy id 724
set src-address "IP_10.162.14.38"
set dst-address "IP_10.3.28.38"
set service "TCP_1002-1005"
set log session-init
exit
set policy id 233 name "THE NAME is 527 ;" from "Untrust" to "Trust" "IP_10.24.108.6" "MIP(10.0.2.149)" "TCP_1002" permit
set policy id 233
set service "TCP_1002-1005"
set service "TCP_1006-1008"
set service "TCP_1786"
set log session-init
exit
"""
I set up the grammar this way:
KPOL = Suppress(Keyword('set policy id'))
NUM = Regex(r'\d+')
KSVC = Suppress(Keyword('set service'))
KSRC = Suppress(Keyword('set src-address'))
KDST = Suppress(Keyword('set dst-address'))
SVC = dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
ADDR = dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
EXIT = Suppress(Keyword('exit'))
EOL = LineEnd().suppress()
P_SVC = KSVC + SVC + EOL
P_SRC = KSRC + ADDR + EOL
P_DST = KDST + ADDR + EOL
x = KPOL + NUM('PId') + EOL + Optional(ZeroOrMore(P_SVC)) + Optional(ZeroOrMore(P_SRC)) + Optional(ZeroOrMore(P_DST))
for z in x.searchString(polraw):
    print z
The result set comes out as:
['800', 'MIP(10.0.2.188)']
['724', 'IP_10.162.14.38', 'IP_10.3.28.38']
['233', 'TCP_1002-1005', 'TCP_1006-1008', 'TCP_1786']
The 800 entry is missing its service tag???
What's wrong here?
Thanks in advance,
Laurent
The problem you are seeing is that in your expression, DST's are only looked for after having skipped over optional SVC's and SRC's. You have a couple of options, I'll go through each so you can get a sense of what all is going on here.
(But first, there is no point in writing "Optional(ZeroOrMore(anything))" - ZeroOrMore already implies Optional, so I'm going to drop the Optional part in any of these choices.)
If you are going to get SVC's, SRC's, and DST's in any order, you could refactor your ZeroOrMore to accept any of the three data types, like this:
x = KPOL + NUM('PId') + EOL + ZeroOrMore(P_SVC|P_SRC|P_DST)
This will allow you to intermix different types of statements, and they will all get collected as part of the ZeroOrMore repetition.
If you want to keep these different types of statements in groups, then you can add a results name to each:
x = KPOL + NUM('PId') + EOL + ZeroOrMore(P_SVC("svc*")|
P_SRC("src*")|
P_DST("dst*"))
Note the trailing '*' on each name - this is equivalent to calling setResultsName with the listAllMatches argument equal to True. As each different expression is matched, the results for the different types will get collected into the "svc", "src", or "dst" results name. Calling z.dump() will list the tokens and the results names and their values, so you can see how this works.
set policy id 233
set service "TCP_1002-1005"
set dst-address "IP_10.3.28.38"
set service "TCP_1006-1008"
set service "TCP_1786"
set log session-init
exit
shows this for z.dump():
['233', 'TCP_1002-1005', 'IP_10.3.28.38', 'TCP_1006-1008', 'TCP_1786']
- PId: 233
- dst: [['IP_10.3.28.38']]
- svc: [['TCP_1002-1005'], ['TCP_1006-1008'], ['TCP_1786']]
If you wrap ungroup on the P_xxx expressions, maybe like this:
P_SVC,P_SRC,P_DST = (ungroup(expr) for expr in (P_SVC,P_SRC,P_DST))
then the output is even cleaner-looking:
['233', 'TCP_1002-1005', 'IP_10.3.28.38', 'TCP_1006-1008', 'TCP_1786']
- PId: 233
- dst: ['IP_10.3.28.38']
- svc: ['TCP_1002-1005', 'TCP_1006-1008', 'TCP_1786']
This is actually looking pretty good, but let me pass on one other option. There are a number of cases where parsers have to look for several sub-expressions in any order. Let's say they are A,B,C, and D. To accept these in any order, you could write something like OneOrMore(A|B|C|D), but this would accept multiple A's, or A, B, and C, but not D. The exhaustive/exhausting combinatorial explosion of (A+B+C+D) | (A+B+D+C) | etc. could be written, or you could maybe automate it with something like
from itertools import permutations
mixNmatch = MatchFirst(And(p) for p in permutations((A,B,C,D),4))
But there is a class in pyparsing called Each that allows you to write the same kind of thing:
Each([A,B,C,D])
meaning "must have one each of A, B, C, and D, in any order". And like And, Or, NotAny, etc., there is an operator shortcut too:
A & B & C & D
which means the same thing.
If you want "must have A, B, and C, and optionally D", then write:
A & B & C & Optional(D)
and this will parse with the same kind of behavior, looking for A, B, C, and D, regardless of the incoming order, and whether D is last or mixed in with A, B, and C. You can also use OneOrMore and ZeroOrMore to indicate optional repetition of any of the expressions.
So you could write your expression as:
x = KPOL + NUM('PId') + EOL + (ZeroOrMore(P_SVC) &
ZeroOrMore(P_SRC) &
ZeroOrMore(P_DST))
I looked at using results names with this expression, and the ZeroOrMore's seem to be confusing things, maybe still a bug in how this is done. So you may have to reserve using Each for more basic cases like my A,B,C,D example. But I wanted to make you aware of it.
Some other notes on your parser:
dblQuotedString.setParseAction(lambda t: t[0].replace('"','')) is probably better written
dblQuotedString.setParseAction(removeQuotes). You don't have any embedded quotes in your examples, but it's good to be aware of where your assumptions might not translate to a future application. Here are a couple of ways of removing the defining quotes:
dblQuotedString.setParseAction(lambda t: t[0].replace('"',''))
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \ and an ending quote \'
# removed leading and trailing "s, but also internal ones too, which are
# really part of the quoted string
dblQuotedString.setParseAction(lambda t: t[0].strip('"'))
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \" and an ending quote \'
# removed leading and trailing "s, and leaves the one internal ones but strips off
# the escaped ending quote
dblQuotedString.setParseAction(removeQuotes)
print dblQuotedString.parseString(r'"This is an embedded quote \" and an ending quote \""')[0]
# prints 'This is an embedded quote \" and an ending quote \"'
# just removes leading and trailing " characters, leaves escaped "s in place
KPOL = Suppress(Keyword('set policy id')) is a bit fragile, as it will break if there are any extra spaces between 'set' and 'policy', or between 'policy' and 'id'. I usually define these kind of expressions by first defining all the keywords individually:
SET,POLICY,ID,SERVICE,SRC_ADDRESS,DST_ADDRESS,EXIT = map(Keyword,
"set policy id service src-address dst-address exit".split())
and then define the separate expressions using:
KSVC = Suppress(SET + SERVICE)
KSRC = Suppress(SET + SRC_ADDRESS)
KDST = Suppress(SET + DST_ADDRESS)
Now your parser will cleanly handle extra whitespace (or even comments!) between individual keywords in your expressions.
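Putting those suggestions together, a cleaned-up version of the whole parser might look like this (a sketch in the same Python 2/pyparsing style as the question; it only reassembles the pieces already shown above):
from pyparsing import (Keyword, Regex, dblQuotedString, removeQuotes,
                       LineEnd, Suppress, ZeroOrMore)

# individual keywords, tolerant of extra whitespace between them
SET, POLICY, ID, SERVICE, SRC_ADDRESS, DST_ADDRESS, EXIT = map(Keyword,
    "set policy id service src-address dst-address exit".split())

NUM = Regex(r'\d+')
EOL = LineEnd().suppress()
dblQuotedString.setParseAction(removeQuotes)

KPOL  = Suppress(SET + POLICY + ID)
P_SVC = Suppress(SET + SERVICE) + dblQuotedString + EOL
P_SRC = Suppress(SET + SRC_ADDRESS) + dblQuotedString + EOL
P_DST = Suppress(SET + DST_ADDRESS) + dblQuotedString + EOL

# statements may appear in any order; listAllMatches names group them by type
x = KPOL + NUM('PId') + EOL + ZeroOrMore(P_SVC('svc*') |
                                         P_SRC('src*') |
                                         P_DST('dst*'))

for z in x.searchString(polraw):
    print z.dump()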
