Not able to filter numeric data in message field of ELK log - Kibana

I am trying to write a regex which matches the following URL in the message field of a log captured in ELK:
Total -Time is GET~HTTP/1.1~/processname/health~127.0.0.1~END~Response-Code:200~Total-Time: 0ms
On this message, I want to apply a filter on the numeric value before 'ms' (here '0'), but I am not able to. I want to capture only entries where Total-Time > 0ms.
I tried Kibana Query Language and Lucene syntax, but neither filters the way I expect.
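For a range filter like > 0 to work, the millisecond value has to live in a numeric field; a substring of a text message field can't be compared numerically. Below is a minimal Python sketch of the extraction this needs (the same capture group could back a grok or ingest pipeline; the pattern is an assumption based on the sample message above):

import re

# Hypothetical extraction: pull the digits before "ms" out of the message so
# the value can be compared as a number. In ELK this would typically live in
# a grok/ingest pipeline; Python is used here only to illustrate the pattern.
MESSAGE = ("Total -Time is GET~HTTP/1.1~/processname/health~127.0.0.1~END~"
           "Response-Code:200~Total-Time: 0ms")

TOTAL_TIME_RE = re.compile(r"Total-Time:\s*(\d+)ms")

match = TOTAL_TIME_RE.search(MESSAGE)
if match:
    total_time_ms = int(match.group(1))
    # Once the value is numeric, "> 0ms" is a simple range comparison.
    print(total_time_ms > 0)  # False for the sample message (0ms)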

Related

Kibana: How to aggregate for all UUIDs

I am tracking my URL hit counts and want to aggregate them.
I have a few URLs as follows:
example.com/service/{uuid}
When I view this in Kibana, it lists the total hit count of each URL individually, so my table has something like:
example.com/homepage 100 count
example.com/service/uuid1 10 count
example.com/service/uuid2 5 count
Is there an easy way to combine all UUIDs into one entry?
I was thinking of replacing the UUIDs with a static string; however, the admins blocked regex support, making the replacement very difficult. So I am trying to see if there is any other way before doing that.
Thanks!
I would suggest creating a new field with scripted fields.
The new field would return the value example.com/service/uuid if the URL contains the word uuid; otherwise it would return the URL as it is.
Then you could do the aggregation on the new field.
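A minimal sketch of that logic, in Python for illustration (in Kibana itself this would be a Painless scripted field; the URL literals come from the question):

# Illustrative version of the scripted-field logic: collapse every
# example.com/service/{uuid} URL into one bucket key, pass others through.
def normalized_url(url: str) -> str:
    if "example.com/service/" in url:
        return "example.com/service/uuid"
    return url

urls = [
    "example.com/homepage",
    "example.com/service/uuid1",
    "example.com/service/uuid2",
]
# Aggregating on the normalized value yields a single entry for all UUIDs.
print([normalized_url(u) for u in urls])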

How to create a chart in Stackdriver which will show the counts of an API label from logs

I am very new to Stackdriver and am trying to implement some charts that I have implemented in Splunk for a different product.
We have the API name in the logs under the textPayload field, and I want to extract the API name from that field and create a chart based on the counts of API names.
Below is a sample of the logs:
type: "k8s_container"
severity: "INFO"
textPayload: "19-04-29T04:30:51.058+0000 INFO PostFilter: POST response to http://<endpoint>/abc/def/users/getNames"
timestamp: "2019-04-29T04:30:51.059143860Z"

type: "k8s_container"
severity: "INFO"
textPayload: "19-04-29T04:30:51.058+0000 INFO PostFilter: POST response to http://<endpoint>/abc/def/users/getPhoneNumbers"
timestamp: "2019-04-29T04:30:51.059143860Z"
I've created a custom metric and extracted the text after "/abc/def" into an API_NAME label, expecting to use it as a group-by field in the metric.
Creating Custom Metric
When I try to explore the metric and see the counts in a stacked bar, I am not able to find the counts by API name.
Metric Explorer
When asking for help debugging a specific issue you've encountered while following existing instructions, you may get a better response by emailing google-stackdriver-discussion@googlegroups.com.
As outlined in Logs-based Metric Labels, you should specify the appropriate capture group to extract the value of the label.
You can then see the time series for the logs-based metric you've created (see https://cloud.google.com/monitoring/api/troubleshooting for how to query the raw data). It's likely that your regular expression is not matching exactly what you think it's matching, and you are always getting an empty value for the API_NAME label. One suspect is the escaped \? in your pattern: according to the RE2 syntax, ? should not be escaped.
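As a sanity check, RE2 and Python share the simple capture-group syntax needed here, so the extraction can be tested locally. A sketch, assuming the API name is everything after /abc/def/ (the path prefix comes from the sample logs; everything else is illustrative):

import re

# Hypothetical label extractor: the single capture group is what a
# logs-based metric label would extract from textPayload.
text_payload = ("19-04-29T04:30:51.058+0000 INFO PostFilter: "
                "POST response to http://<endpoint>/abc/def/users/getNames")

api_name_re = re.compile(r"/abc/def/(\S+)")

match = api_name_re.search(text_payload)
print(match.group(1) if match else "<no match>")  # users/getNames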

Using MessageAttributes in AWS SNS Post request

I am trying to use the MessageAttributes parameter in an AWS SNS POST request in order to customize the Sender ID (via AWS.SNS.SMS.SenderID). I am sending to a German phone number, which is allowed to have a custom Sender ID. Can anyone help me with the correct syntax?
Thanks,
Subhajit
You need to send 3 key/value pairs for each attribute:
MessageAttributes.entry.${index}.Name=${attributeName}&
MessageAttributes.entry.${index}.Value.DataType=String&
MessageAttributes.entry.${index}.Value.StringValue=${attributeValue}
${index} is the numerical index of each attribute, starting with 1.
On the second line you specify the type of the value; the most common type is String.
The third line is the actual value. See the AWS SNS documentation for more information; I have only used strings and StringValue.
All the values need to be URL-encoded, for obvious reasons.
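A minimal sketch of the parameter layout for a Publish call with a custom Sender ID (the phone number and sender name are placeholders, and a real request must additionally be SigV4-signed, which boto3 or the AWS CLI handle for you):

from urllib.parse import urlencode

# Publish request parameters with a custom SenderID. Placeholder values;
# a real request must also carry AWS SigV4 authentication.
params = {
    "Action": "Publish",
    "PhoneNumber": "+491701234567",  # example German number
    "Message": "Hello from SNS",
    "MessageAttributes.entry.1.Name": "AWS.SNS.SMS.SenderID",
    "MessageAttributes.entry.1.Value.DataType": "String",
    "MessageAttributes.entry.1.Value.StringValue": "MySender",
}

# urlencode performs the URL-encoding mentioned above.
print(urlencode(params))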
I was able to solve it using the following:
MessageAttributes.entry.N.Name (key)
MessageAttributes.entry.N.Value (value)

Is Kibana date format index pattern still supported?

I have the latest Kibana 5.4.0 and the docs say:
https://www.elastic.co/guide/en/kibana/current/index-patterns.html#settings-create-pattern
To use an event time in an index name, enclose the static text in the pattern and specify the date format using the tokens described in the following table.
For example, [logstash-]YYYY.MM.DD matches all indices whose names have a timestamp of the form YYYY.MM.DD appended to the prefix logstash-, such as logstash-2015.01.31 and logstash-2015.02.01.
When I try to create the pattern [testx_]YYYY-MM-DD_HH-mm, [testx_]YYYY-MM-DD_HH, or [testx_]YYYY-MM-DD, Kibana can't find the @timestamp field and says that none of the indices match these patterns.
GET _cat/indices
yellow open testx_2017-06-19_14 dHAfSzAuSEKpYLuA8p5EIw 1 1 1 0 4.6kb 4.6kb
yellow open testx_2017-06-19_13-59 hfGkELCsSUavaX8GuLPuMQ 1 1 1 0 4.6kb 4.6kb
yellow open testx_2017-06-19 lbsdW18cSIuZ2bNn1Fw7WA 1 1 1 0 4.6kb 4.6kb
On the other hand, for the testx_* pattern Kibana finds the @timestamp field and matches 100% of the indices...
Does the latest Kibana support time-based names for indices?
I would like to gain the performance benefits of the index naming schema if it's still appropriate...
UPDATE: screenshots showing what is wrong and some warnings.
UPDATE 2: I found https://www.elastic.co/blog/managing-time-based-indices-efficiently, which promotes the "Rollover Pattern". Maintaining the date/time in the index name is no longer the recommended way, but I doubt the new API makes life easier.
According to these issues:
https://github.com/elastic/kibana/issues/5447 - Default Logstash index pattern should be "[logstash-]YYYY.MM.DD", not "logstash-*"
Kibana 4.3.0 should address this for you: it automatically optimizes wildcard index patterns such as logstash-* in the same way that you could previously only achieve by manually configuring a time-based index pattern name that matches your underlying indexing scheme (e.g. [logstash-]YYYY.MM.DD).
https://github.com/elastic/kibana/issues/4342 - Efficiently search against wildcard indices regardless of underlying indexing strategy
Elasticsearch 1.6 introduced the _field_stats API which will, for the first time, allow us to search for indices that contain fields within a given range. For example, we can search for indices that contain an @timestamp between X and Y.
This means that users will no longer be required to roll their indices at UTC midnight, nor use date patterns at all. They can effectively name indices whatever they want, and Kibana can automatically optimize requests by firing a pre-flight request for indices. We might need to add some caching here, but it should greatly enhance usability.
There is no need for time-based names for performance, but keeping time-based index names is still useful for archiving old indices.
UPDATE: Created an issue to remove the time-based pattern from the docs: https://github.com/elastic/kibana/issues/12406
Previous versions of Elasticsearch allowed automatic addition of fields like @timestamp.
https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking_50_mapping_changes.html
So your indices don't contain time-based events; in other words, there is no field with a datetime type.
I am dumping JSON logs directly to Elasticsearch and adding a timestamp before indexing. So while creating the index pattern, I select the timestamp field I have defined.
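A minimal sketch of that indexing step, assuming a local Elasticsearch; the index name, document path, and field names are illustrative (the document path varies by Elasticsearch version):

import json
from datetime import datetime, timezone

import requests

# Add a timestamp to the raw JSON log before indexing so Kibana has a
# datetime field to select when creating the index pattern.
log_event = {"message": "user logged in", "level": "INFO"}
log_event["timestamp"] = datetime.now(timezone.utc).isoformat()

resp = requests.post(
    "http://localhost:9200/testx_2017-06-19/logs",  # /<index>/<type> on ES 5.x
    headers={"Content-Type": "application/json"},
    data=json.dumps(log_event),
)
print(resp.status_code)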

Google Analytics doesn't apply my filter

I created a filter on my account.
This filter is a custom filter, search and replace.
I use
"Request URI" for Filter Field,
\?.* for Search String
I also attached this filter to my specific view.
My problem is that if I go to the view -> Reporting -> Behavior -> Site Content -> All Pages, I see that the filter is not applied: I still see pages such as "/xy.html?id=12345".
I would expect "/xy.html" only. Somewhere I've read that filters don't work on past data, but I did some test visits after I applied the filter and the URLs weren't changed :(
If I click on verify, I get this message: "This filter would not have changed your data. Either the filter configuration is incorrect, or the set of sampled data is too small."
Your filter definition should use regular expressions for search and replace.
Search String: (.*)?(\?.*)
Replace String: \1
This will search for two parts: 1. all symbols before the very first "?", and 2. the first "?" together with all symbols after it in your URI.
The replacement will use the first part (all symbols before the very first "?").
Make sure you google some regex basics.
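If you want to sanity-check the pattern before saving the filter, the same expression can be tested in any compatible regex engine. A quick check in Python, using the URI from the question:

import re

# The suggested search-and-replace, applied locally: everything from the
# first "?" onward is dropped, keeping only capture group 1.
uri = "/xy.html?id=12345"
print(re.sub(r"(.*)?(\?.*)", r"\1", uri))  # -> /xy.html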
Filters only apply to new data as it is collected, never to the historic data you have already collected in your properties.
