I need to extract a number from a message in Kibana and find all values that are greater than 1. Is there a way to get that number without the yellow highlighting? My logs are these:
thanks a lot
Solved by using regex:
message: "Time for external call to Firebase:" AND /[1-9]\.[0-9]{1,3}/
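The same pattern can be sanity-checked outside Kibana. A minimal Python sketch (the log line below is a made-up example of the message format; the dot is escaped so it matches only a literal decimal point):

```python
import re

# Hypothetical log line following the message format from the question.
log = 'Time for external call to Firebase: 2.345 ms'

# Same idea as the Kibana query: a nonzero leading digit, a literal dot,
# then 1-3 decimal digits -- i.e. a value greater than 1.
match = re.search(r'[1-9]\.[0-9]{1,3}', log)
value = float(match.group()) if match else None
```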
My goal is to change a buffered number with many leading zeros (like 00000000123) to only the number without the leading zeros (123).
Do you have any ideas how I could do that?
Thank you and best regards.
You should set the DataType to Numeric when setting the buffer:
(screenshot: setting a numeric buffer)
Buff1 in my case will have leading zeros, Buff2 will be without them
Then create another buffer as follows, depending on whether the target machine has MS Excel installed:
Excel installed
Name: BufferedIntegerValue Value: {CALC[{B[BufferedNumber]}]}
Excel not installed
Name: BufferedIntegerValue Value: {MATH[{B[BufferedNumber]}]}
from TOSCA version 13.x manual: https://documentation.tricentis.com/tosca/1330/en/content/tbox/calculations.htm?Highlight=math%20function
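Outside TOSCA, stripping the leading zeros is just an integer round-trip; a minimal Python sketch of the same idea:

```python
# A buffered value with leading zeros, as in the question.
buffered = "00000000123"

# Converting to int drops the leading zeros; str() gives back the
# zero-free text form.
stripped = str(int(buffered))
```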
I have a dataframe that contains some cells with error messages as string. The strings come in the following forms:
ERROR-100_Data not found for ID "xxx"
ERROR-100_Data not found for id "xxx"
ERROR-101_Data not found for SUBID "yyy"
Data not found for ID "xxx"
Data not found for id "xxx"
I need to extract the number of the error (if it has one) and the GENERAL description, avoiding the specificity of the ID or SUBID. I have a function where I use the following regex expression:
sub(".*?ERROR-(.*?)for ID.*", "\\1", df[,col1])
This works only for the first case. Is there a way to obtain the following results using only one expression?
100_Data not found
100_Data not found
101_Data not found
Data not found
Data not found
We can use:
tsxt <- 'ERROR-100_Data not found for ID "xxx"'
gsub("\\sfor.*|ERROR-","",tsxt, perl=TRUE)
[1] "100_Data not found"
Or, as suggested by @Jan, anchor ERROR to make it more general:
gsub("\\sfor.*|^ERROR-","",tsxt, perl=TRUE)
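The same alternation works in other regex engines too; a quick Python translation of the gsub() call, applied to all five sample strings:

```python
import re

samples = [
    'ERROR-100_Data not found for ID "xxx"',
    'ERROR-100_Data not found for id "xxx"',
    'ERROR-101_Data not found for SUBID "yyy"',
    'Data not found for ID "xxx"',
    'Data not found for id "xxx"',
]

# Same alternation as the gsub() call: delete either everything from the
# whitespace before "for" onward, or the anchored "ERROR-" prefix.
cleaned = [re.sub(r'\sfor.*|^ERROR-', '', s) for s in samples]
```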
You could use
^ERROR-|\sfor.+
which needs to be replaced by an empty string, see a demo on regex101.com.
Use this regex:
.*?(?:ERROR-)?(.*?)\s+for\s+(?:[A-Z]*)?ID
This makes sure the ERROR- part is optional, then captures everything before for ...ID is encountered (with the case-insensitive flag enabled, so id and SUBID match too). The only capturing group contains the desired text, which can then be used directly without needing any substitution.
The first and the third groups in this regex are non-capture groups, i.e., they'll match their content but not capture it for further usage, thus leaving us with only one capture group (the middle one). This is done since the OP isn't interested in the data they refer to. Making them as capture groups would have meant three results, and the post-processing would have involved hard-coding the usage of second group only (the middle one), without ever having to deal with the other two.
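A quick Python check of this pattern against the five samples (the case-insensitive flag is added explicitly here, since the bare pattern only matches uppercase ID):

```python
import re

# The answer's pattern, compiled with the case-insensitive flag it relies on.
pattern = re.compile(r'.*?(?:ERROR-)?(.*?)\s+for\s+(?:[A-Z]*)?ID', re.IGNORECASE)

samples = [
    'ERROR-100_Data not found for ID "xxx"',
    'ERROR-100_Data not found for id "xxx"',
    'ERROR-101_Data not found for SUBID "yyy"',
    'Data not found for ID "xxx"',
    'Data not found for id "xxx"',
]

# group(1) is the single capturing group: no substitution needed.
results = [pattern.search(s).group(1) for s in samples]
```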
I hope this isn't a duplicate, I was unable to find a question that refers to the exact same issue.
I have a data frame in R where, within one column (let's call it 'Task'), there are 170 items named EC1 through EC170. I would like to replace them so that they just say 'EC', without a number following.
The important thing is that this column also has other types of values, that do not start with EC, so I don't just want to change the names of all values in the column, but only those that start with 'EC'.
In Linux I would use sed to replace 'EC*' with 'EC', but I don't know how to do that in R.
Rich Scriven's startsWith suggestion worked great; I just had to write df$task instead of just task. Thanks a lot! This is what I used: df$task[startsWith(df$task, "EC")] <- "EC"
I'd recommend regex as well. You're looking for the string "EC" followed by 1 to 3 digits, replacing those occurrences with "EC":
df$Task = sub("EC\\d{1,3}", "EC", df$Task)
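If the same data lived in a pandas DataFrame instead of R, the startsWith idea translates directly (the frame below is a made-up example with a few EC items and some other task names):

```python
import pandas as pd

# Hypothetical frame; the real one has 170 EC items plus other values.
df = pd.DataFrame({"Task": ["EC1", "EC170", "rest", "EC23", "fixation"]})

# Same idea as startsWith() in R: only rows whose Task begins with "EC"
# are overwritten; everything else is left untouched.
df.loc[df["Task"].str.startswith("EC"), "Task"] = "EC"
```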
I am currently writing a block of code in R which collects data via a SPARQL query. My problem is that when I try to filter the query by date, R gives an "unexpected numeric constant" error.
There is no mistake in the SPARQL code itself, because when I run the exact same query on the endpoint I receive data normally.
Below is the block of code where I have the problem. The lines before and after do not matter; the issue is the second line of the date filter.
...
OPTIONAL {?seller gr:legalName ?sellerLegalName} .
FILTER REGEX (STR(?date) >= "2015-01-01") .
FILTER NOT EXISTS {?spendingItem elod:hasCorrectedDecision ?correctedDecision} .
...
Please, I would kindly ask for your help! :)
For any further information you need in order to solve the problem, feel free to contact me.
Thank you all!!!
SOLVED!
I found that the date should be passed as a timestamp!
Also, I found a useful site where you can convert any date to a timestamp and vice versa.
I would like to thank you all for your responses and your useful help!
You should filter it as a date/time value rather than as a string - perhaps that will help:
FILTER (?date > "2015-01-01"^^xsd:date)
See this answer: SPARQL date range
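If the query string is assembled programmatically before being sent to the endpoint, the typed literal can be formatted from a date object. A minimal Python sketch (the FILTER line is just the fragment being built, not a full query):

```python
from datetime import date

cutoff = date(2015, 1, 1)

# Build the FILTER line with a typed xsd:date literal, as in the
# corrected query, instead of comparing plain strings.
filter_line = f'FILTER (?date > "{cutoff.isoformat()}"^^xsd:date) .'
```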
When creating a wildcard filter in Tableau with connection to a Vertica DB, I get the following error:
"Function lower may give a 128010-octet result; the limit is 65000 octets"
It seems like a problem on the Vertica side, but 65000 is the maximum size of the varchar data type in Vertica.
I checked that the wildcard filter works fine with a usual Excel spreadsheet.
Verify that the data is in UTF-8 format. There may be special characters in the string.
You can verify that all of the string-based data in the table is in UTF-8 format by using the ISUTF8 function.
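For reference, a rough Python analogue of that check (a sketch of the same idea, not Vertica's actual implementation): a byte string is valid UTF-8 exactly when it decodes without error.

```python
def is_utf8(raw: bytes) -> bool:
    """Return True if the raw bytes decode cleanly as UTF-8,
    roughly mirroring what Vertica's ISUTF8() reports."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```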
I was facing the same problem, I only wanted the first few characters so this worked for me:
select upper(substring(columnname, 0, 10)) from tablename;
It is not a solution if you want the whole value, though; you will need to work something out, since the length of some values in the result may exceed the limit.