I have a column in my Kusto table which contains log messages of the following form
msg1
2020-10-29T12:57:08+00:00 dc1-k30-asw05.nw.domain.com cron: :- addNeighbor: Created neighbor ac:1f:6b:8b:09:99 on Vlan100
msg2
2020-10-29T15:55:20+00:00 dc1-k12-asw30.domain.com cron: :- validatePortSpeed: Unable to validate speed for port 100000000005c. Not supported by platform
Now I want to extract some substrings from it into separate columns. The substrings I am interested in are the timestamp value (2020-11-02T10:31:21+00:00), which is basically at the start of the line, the program which emitted the log (cron in this case), and lastly the actual log message. So I am using the parse operator to do that instead of using multiple extract calls and evaluating the pattern multiple times.
parse kind = regex message with @".*?" syslogTime:string @"\s+.*domain.com .*?" program:string @": " msg
This query results in the following output
"syslogTime": 2020-10-29T12:57:08+00:00,
"program": cron: :- addNeighbor,
"msg": Created neighbor ac:1f:6b:8b:09:99 on Vlan100,
"syslogTime": 2020-10-29T15:55:20+00:00,
"program": cron: :- validatePortSpeed,
"msg": Unable to validate speed for port 100000000005c. Not supported by platform,
As seen above, the first field is parsed correctly; however, for the second column program the matched value is incorrect: the regex engine is doing a greedy match up to the second : even though I have used a non-greedy quantifier @"\s+.*domain.com .*?".
I have also tried adding the ungreedy flag U to the parse query, but with that the last part of the message is not captured.
parse kind = regex flags=U message with @".*" syslogTime:string @"\s+.*domain.com.*" program:string @": " msg
The output I get:
"syslogTime": 2020-11-02T08:47:35+00:00,
"program": cron,
"msg":
"syslogTime": 2020-10-29T15:53:36+00:00,
"program": cron,
"msg":
For the second column, i.e. program, I want to match up to the first ":". I have tried multiple variations of the above queries, but none of them yielded the expected output, so I am not sure what I am missing here.
I was able to solve this by modifying the parse expression to the following.
parse kind = regex flags=U message with @".*" syslogTime:string @"\s+.*domain.com.*" program:string @": " msg @"$"
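With flags=U every quantifier in the pattern becomes lazy, so without an end anchor the trailing msg column also matches as little as possible, which is why msg came back empty in the earlier attempt; the @"$" at the end forces msg to run to the end of the line. A minimal sketch to try the working expression against the sample messages inline (the datatable is just hypothetical test data):
// hypothetical inline test data mirroring the sample messages above
datatable(message: string)
[
    "2020-10-29T12:57:08+00:00 dc1-k30-asw05.nw.domain.com cron: :- addNeighbor: Created neighbor ac:1f:6b:8b:09:99 on Vlan100",
    "2020-10-29T15:55:20+00:00 dc1-k12-asw30.domain.com cron: :- validatePortSpeed: Unable to validate speed for port 100000000005c. Not supported by platform"
]
| parse kind = regex flags=U message with @".*" syslogTime:string @"\s+.*domain.com.*" program:string @": " msg @"$"
| project syslogTime, program, msg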
g.V(VertexId).valueMap("tags") returns
{'tags': ['My Last Day', 'Poor Connection > Network Issue', 'Or Hello > Equals Hi > Last Message ', 'Network Issue', 'Last Message ']}
g.V(VertexId).valueMap("tags").by(unfold()) returns
{'tags': 'My Last Day'}
Expecting something like this:
{'tags': "My Last Day, Poor Connection > Network Issue, Or Hello > Equals Hi > Last Message, Network Issue, Last Message"}
If I understand correctly, you want to return the set of multi-properties for "tags" as a single string. So the question really is how to concatenate the multi-properties together into a single string in Gremlin.
The answer at this time is that you can't. There are no string concatenation options in Gremlin at this time, unless you use a lambda (which won't work for Neptune and is generally discouraged for a number of reasons anyway). You will need to return all the multi-properties and then concatenate them in your application. It is possible that we will see string concatenation functions in TinkerPop 3.7.0.
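For example, with gremlinpython the application-side concatenation could look roughly like this (the endpoint and vertex id are placeholders, and it assumes the tag values are plain strings):
# minimal client-side sketch with gremlinpython; endpoint and vertex id are hypothetical
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

g = traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))

# fetch every "tags" multi-property value for the vertex, then join them in the application
tags = g.V('vertex-id').values('tags').toList()
print({'tags': ', '.join(tags)})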
I'm writing a test case in Robot Framework. I'm getting the response as the JSON string below:
{
  "responseTimeStamp": "1970-01-01T05:30:00",
  "statusCode": "200",
  "statusMsg": "200",
  "_object": {
    "id": "TS82",
    "name": "newgroup",
    "desc": "ttesteste",
    "parentGroups": [],
    "childGroups": [],
    "devices": null,
    "mos": null,
    "groupConfigRules": {
      "version": null,
      "ruleContents": null
    },
    "applications": null,
    "type": 0
  }
}
From that I want to take "_object" using:
${reqresstr} = ${response['_object']}
... but I am getting the error "No keyword with name '=' found".
If I try the following:
${reqresstr}= ${response['_object']}
... I'm getting the error "Keyword name cannot be empty." I tried removing the '=' but still get the same error.
How can I extract '_object' from that json string?
When using the "=" for variable assignment with the space-separated format, you must make sure you have no more than a single space before the "=". Your first example shows that you've got more than one space on either side of the "=". You must have only a single space before the = and two or more after, or robot will treat the spaces as a separator between a keyword and an argument.
For the "Keyword name cannot be empty" error: the first cell after a variable name must be a keyword. Unlike traditional programming languages, you cannot directly assign a string to a variable.
To set a variable to a string you need to use the Set Variable keyword (or one of the variations such as Set Test Variable). For example:
${reqresstr}=    Set Variable    ${response['_object']}
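Note that indexing with ${response['_object']} only works once ${response} actually holds a dictionary rather than the raw JSON text. If you still have the text, one hedged sketch (with ${response_text} as a placeholder for wherever the string comes from) is to parse it first with Evaluate:
# hedged sketch: ${response_text} is a placeholder for the raw JSON string
${response}=    Evaluate    json.loads($response_text)    modules=json
${reqresstr}=    Set Variable    ${response['_object']}
Log To Console    ${reqresstr}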
${reqresstr}= '${response["_object"]}'
Wrap it inside quotes and leave two spaces after the =.
There is a syntax error in your command. Make sure there is a space between ${reqresstr} and =.
Using your example above:
${reqresstr} = ${response['_object']}
I am trying to upload CSV data with a CASE statement in the query, but the following error appears.
Cypher:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
MATCH(a:test_t{tid:line.pid})
CASE
WHEN line.key !='NA' THEN
WITH split(line.key,",") as name
UNWIND name as x
MERGE(k:test_key{key_term:toLower(x)})
MERGE(a)-[:contains]->(k)
END
Error
Neo.ClientError.Statement.SyntaxError: Invalid input 'S': expected 'l/L' (line 5, column 3 (offset: 137))
"CASE"
Can anyone help me?
The CASE clause does not support embedding other Cypher clauses (but it can invoke functions). In fact, a CASE clause is not actually needed for your use case.
This query should work (the :auto at the beginning is needed in neo4j 4.0+):
:auto USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line FIELDTERMINATOR ';'
WITH line
WHERE line.key <> 'NA'
MATCH (a:test_t {tid: line.pid})
UNWIND split(line.key, ',') as x
MERGE (k:test_key {key_term: TOLOWER(x)})
MERGE (a)-[:contains]->(k)
This query filters out all unwanted lines as soon as they are obtained from the file. Reducing the number of rows of data being worked on as early as possible is good practice.
Also, you have a second issue. Your data file cannot use the comma as both the (default) field terminator AND as the delimiter between your x values.
To resolve this ambiguity, the above query chose to use the FIELDTERMINATOR ';' option to specify that the ";" character will be used as the field terminator. A sample data file would look like this:
pid;key
123;NA
234;Foo,Bar
345;Bar,Baz
456;NA
567;Baz
You are using CASE incorrectly. You cannot have update clauses inside of a CASE expression. Instead, you can use a WHERE clause to filter the rows of the file. For instance, adding WHERE line.key <> 'NA' while processing the file before you move on to the update will work. Something like this should fit the bill.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
MATCH (a:test_t {tid: line.pid})
WITH a, line
WHERE line.key <> 'NA'
WITH a, split(line.key, ",") as name
UNWIND name as x
MERGE (k:test_key {key_term: toLower(x)})
MERGE (a)-[:contains]->(k)
It looks like, from your logic, you could even move the test up above the MATCH. So this might be better (fewer unnecessary matches).
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///test.csv' as line
WITH line
WHERE line.key <> 'NA'
MATCH (a:test_t {tid: line.pid})
WITH a, split(line.key, ",") as name
UNWIND name as x
MERGE (k:test_key {key_term: toLower(x)})
MERGE (a)-[:contains]->(k)
My requirement is to check whether the input is a list or not using Robot Framework.
I tried using type(${temp}).__name__ with the Evaluate keyword, which works in the case of a list but fails for the string type.
Below is the error message:
Evaluating expression 'type(testdata).__name__' failed: SyntaxError: invalid token (<string>, line 1)
I tried to use a regex as well, but no luck. :(
Code:
testRegEx
    ${match}    Run Keyword    Should Match Regexp    ["swerwv","sfsdfdsf","edsfdf"]    \[\s\S\]
    Log To Console    ${match}
Output:
FAIL : '["swerwv","sfsdfdsf","edsfdf"]' does not match '[\s\S]'
I'm new to Robot Framework. Any help would be appreciated.
My requirement is to check whether the input is list or not using robot framework.
You can use robot's special syntax for evaluate and the various keywords which accept expressions, where you omit the curly braces in the variable reference (for example $a_list instead of ${a_list}) to pass the actual variable to the expression (versus passing the value of the variable).
Example:
*** Variables ***
@{a_list}    one    two    three

*** Test Cases ***
Test that variable is a list
    run keyword unless    type($a_list) == list
    ...    Fail    not a list
This feature is mentioned in the Evaluating expressions section of the built-in library.
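The same special syntax also works with the Evaluate keyword if you would rather get the result into a variable. A hedged sketch, assuming ${temp} holds the value under test:
# ${temp} stands for whatever value you need to check
${is list}=    Evaluate    isinstance($temp, list)
Log To Console    ${is list}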
Situation: I have two tags defined, then I try to output them to the console. What comes out looks similar to an array, but I'd like to remove the formatting and just have the actual words output.
Here's what I currently have:
[Tags]    ready    ver10
Log To Console    \n@{TEST TAGS}
And the result is
['ready', 'ver10']
So, how would I chuck the [', the ', ' and the '], thus only retaining the words ready and ver10?
Note: I was getting [u'ready', u'ver10'], but after getting advice to make sure I was running Robot Framework under Python 3 (uninstalling robotframework via pip so it is now installed only via pip3), the u has vanished. That's great!
There are several ways to do it. For example, you could use a loop, or you could convert the list to a string before calling log to console.
Using a loop.
Since the data is a list, it's easy to iterate over the list:
FOR    ${tag}    IN    @{Test Tags}
    log to console    ${tag}
END
Converting to a string
You can use the evaluate keyword to convert the list to a string of values separated by a newline. Note: you have to use two backslashes in the call to evaluate since both robot and python use the backslash as an escape character. So, the first backslash escapes the second so that python will see \n and convert it to a newline.
${tags}=    evaluate    "\\n".join($test_tags)
log to console    \n${tags}
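Another option, if you would rather avoid evaluate entirely, is a sketch along these lines using the built-in Catenate keyword (${\n} is robot's newline variable):
# alternative sketch: join the tags with Catenate instead of evaluate
${tags}=    Catenate    SEPARATOR=${\n}    @{TEST TAGS}
log to console    \n${tags}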