I have the latest Kibana (5.4.0), and the docs say:
https://www.elastic.co/guide/en/kibana/current/index-patterns.html#settings-create-pattern
To use an event time in an index name, enclose the static text in the pattern and specify the date format using the tokens described in the following table.
For example, [logstash-]YYYY.MM.DD matches all indices whose names have a timestamp of the form YYYY.MM.DD appended to the prefix logstash-, such as logstash-2015.01.31 and logstash-2015.02.01.
When I try to create the pattern [testx_]YYYY-MM-DD_HH-mm, [testx_]YYYY-MM-DD_HH, or [testx_]YYYY-MM-DD, Kibana can't find the @timestamp field and says that no indices match these patterns.
GET _cat/indices
yellow open testx_2017-06-19_14 dHAfSzAuSEKpYLuA8p5EIw 1 1 1 0 4.6kb 4.6kb
yellow open testx_2017-06-19_13-59 hfGkELCsSUavaX8GuLPuMQ 1 1 1 0 4.6kb 4.6kb
yellow open testx_2017-06-19 lbsdW18cSIuZ2bNn1Fw7WA 1 1 1 0 4.6kb 4.6kb
On the other hand, for the testx_* pattern Kibana finds the @timestamp field and matches 100% of the indices...
Does the latest Kibana support time-based index names?
I would like to gain the performance benefits of a time-based index naming scheme, if that is still appropriate...
UPDATE
(Screenshots showing what is wrong and some warnings were attached here.)
UPDATE 2: I found https://www.elastic.co/blog/managing-time-based-indices-efficiently, which promotes the "Rollover Pattern". Maintaining a date/time in the index name is no longer the recommended way, but I doubt the new API makes life easier.
According to these issues:
https://github.com/elastic/kibana/issues/5447 - Default Logstash index pattern should be "[logstash-]YYYY.MM.DD", not "logstash-*"
Kibana 4.3.0 should address this for you: it automatically optimizes wildcard index patterns such as logstash-* in the same way that you could previously only achieve by manually configuring a time-based index pattern name that matches your underlying indexing scheme (e.g. [logstash-]YYYY.MM.DD).
https://github.com/elastic/kibana/issues/4342 - Efficiently search against wildcard indices regardless of underlying indexing strategy
Elasticsearch 1.6 introduced the _field_stats API which will, for the first time, allow us to search for indices that contain fields within a given range. For example, we can search for indices that contain an @timestamp between X and Y.
This means that users will no longer be required to roll their indices at UTC midnight, nor use date patterns at all. They can effectively name indices whatever they want, and Kibana can automatically optimize requests by firing a pre-flight request for indices. We might need to add some caching here, but it should greatly enhance usability.
So there is no need for time-based names for performance, but keeping time-based index names is still useful for archiving old indices.
UPDATE: I created an issue to remove the time-based pattern from the docs: https://github.com/elastic/kibana/issues/12406
Previous versions of Elasticsearch allowed automatic addition of fields like _timestamp; see:
https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking_50_mapping_changes.html
So the indices don't contain time-based events, or in other words, there is no field mapped as a date.
I am dumping JSON logs directly to Elasticsearch and adding a timestamp before indexing, so when creating the index pattern I select the timestamp field I have defined.
Related
I am tracking my URL hit counts and want to aggregate them.
I have a few URLs as follows:
example.com/service/{uuid}
When I view them in Kibana, it lists the total hit count of each URL individually, so my table has something like:
example.com/homepage 100 count
example.com/service/uuid1 10 count
example.com/service/uuid2 5 count
Is there an easy way to combine all uuids into 1 entry?
I was thinking of replacing the UUIDs with a static string; however, the admins blocked regex support, making the replacement very difficult. So I am trying to see if there is any other way before doing that.
Thanks!
I would suggest creating a new field with scripted fields.
The new field would return the value example.com/service/uuid if the URL contains a UUID; otherwise it would return the URL as it is.
Then you could do the aggregation on the new field.
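A minimal sketch of that logic in JavaScript (in Kibana itself the scripted field would be written in Painless with the same idea; the UUID regex is an assumption about your URL format):

    // Collapse any URL of the form example.com/service/<uuid> into one bucket.
    // The UUID pattern is an assumption; adjust it to match your actual IDs.
    var UUID_RE = /\/service\/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

    function normalizeUrl(url) {
      return UUID_RE.test(url) ? url.replace(UUID_RE, '/service/uuid') : url;
    }

    // normalizeUrl('example.com/service/1b4e28ba-2fa1-11d2-883f-0016d3cca427')
    //   -> 'example.com/service/uuid'
    // normalizeUrl('example.com/homepage') -> 'example.com/homepage'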
Let's say I have a checklist collection where each item is its own document, since each item contains a lot of other data.
I want the user to be able to drag and drop to reorder this list and save it that way. My initial thought was to have a field that is changed to reflect the order, but moving one object requires changing the value on every document after the new location.
Is there a way to achieve this without a massive number of writes?
If the order of documents changes frequently, you can avoid rewriting each document by instead using a separate document to maintain the order: an array of strings containing the document IDs. In fact, you could hold lots of different orderings depending on how you want to display the documents.
Say you have a collection of documents:
collection
- docA
- docB
- docC
Now you want to store mutable orderings in a document called "order" in another collection:
collection-meta
- order
- byAlpha: ["docA", "docB", "docC"]
- byScore: ["docC", "docA", "docB"]
Just query the "order" document first, then get each document for display in the order defined in the array. To reorder the documents, just update the contents of the single array in the "order" doc.
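A minimal sketch of that approach, assuming Firestore (the question doesn't name the database) and the Firebase web SDK; the collection and field names match the example above:

    var db = firebase.firestore(); // assumes the Firebase app is already initialized

    // Read the ordering, then fetch the documents in that order.
    function loadByScore() {
      return db.collection('collection-meta').doc('order').get().then(function (orderDoc) {
        var ids = orderDoc.data().byScore; // e.g. ["docC", "docA", "docB"]
        return Promise.all(ids.map(function (id) {
          return db.collection('collection').doc(id).get();
        }));
      });
    }

    // Reordering is a single write: overwrite the array with the new order.
    function saveScoreOrder(newIds) {
      return db.collection('collection-meta').doc('order').update({ byScore: newIds });
    }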
I usually do this by using a floating point value for the order.
Say you have a list with these 3 documents:
ID=a, order=1.0
ID=b, order=2.0
ID=c, order=3.0
Now let's assume we want to move document a between b and c. You'd do that by changing its order to 2.0 + (3.0 - 2.0) / 2 = 2.5.
ID=b, order=2.0
ID=a, order=2.5
ID=c, order=3.0
This works for a reasonable number of swaps, which is the scenario I usually deal with.
If you're dealing with a large or potentially unbounded number of reorderings, you'll want to watch the precision of the floating-point operation. In that case an alternative is to use a custom value type, e.g. encoding the value into a string field and using a library to do the division at higher numeric precision.
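A minimal sketch of that midpoint calculation in JavaScript (the function name and the end-of-list handling are my own assumptions):

    // Compute an order value for an item dropped between two neighbours.
    // `before` / `after` are the order values of the surrounding items,
    // or null when the item is dropped at either end of the list.
    function orderBetween(before, after) {
      if (before == null && after == null) return 1.0; // empty list
      if (before == null) return after - 1.0;          // dropped at the top
      if (after == null) return before + 1.0;          // dropped at the bottom
      return before + (after - before) / 2;            // midpoint
    }

    // Moving document "a" between "b" (2.0) and "c" (3.0):
    // orderBetween(2.0, 3.0) === 2.5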
If I understood your question correctly, you can have a separate document that contains an index of all the other checklist documents. The checklist documents themselves keep their respective info as it is supposed to be. For example:
    Index (Before)      Index (After Drag)
    Checklist1          Checklist3
    Checklist2          Checklist2
    Checklist3          Checklist1
and the Index document contains a structure like this after the drag is performed:

    Index document (field -> value)
    Checklist1 -> 3
    Checklist2 -> 2
    Checklist3 -> 1
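A sketch of writing that index document in a single update, again assuming Firestore; the collection and document names are illustrative:

    var db = firebase.firestore();

    // After a drag, write the new position of every checklist in one document update,
    // instead of touching each checklist document individually.
    function saveChecklistOrder(orderedIds) {
      var positions = {};
      orderedIds.forEach(function (id, i) {
        positions[id] = i + 1; // e.g. { Checklist3: 1, Checklist2: 2, Checklist1: 3 }
      });
      return db.collection('checklists-meta').doc('index').set(positions);
    }

    // saveChecklistOrder(['Checklist3', 'Checklist2', 'Checklist1']);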
I would like to filter out certain sources and mediums (specifically email clients). I need to keep the dimension as one column (I already use the maximum number of dimensions, 7).
The filter works fine when I have only one sourceMedium such as:
ga:sourceMedium!=amail.centrum.cz / referral
The filter doesn't work at all when I use two sourceMedium values:
ga:sourceMedium!=amail.centrum.cz / referral,ga:sourceMedium!=mail.google.com / referral
It doesn't matter if I use AND or OR; the query doesn't output the desired data.
I assume there is supposed to be some delimiter that identifies amail.centrum.cz / referral as one string, separated from the next one. I already tried using ' at the beginning and end of the string, but it doesn't seem to work.
Is there anything I missed in the docs, or anything else? Looking for your help :)
BTW: I'm aware of the workaround of pulling the data out of GA and filtering it manually (comparing the output against my list of email clients that I would like to exclude).
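For reference, a sketch of how filter expressions combine in the Core Reporting API v3 (assuming that's the API being queried): a semicolon means AND and a comma means OR, so excluding two source/medium pairs would be an AND of two != expressions:

    // Assumed Core Reporting API v3 filter string: ';' = AND, ',' = OR.
    // Excluding two source/medium combinations requires AND-ing the two != filters.
    var filters = 'ga:sourceMedium!=amail.centrum.cz / referral' +
                  ';' +
                  'ga:sourceMedium!=mail.google.com / referral';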
I would like to define a field that has a list of allowed values but also gives the user the option to type in their own. For example, I list a bunch of previous jobs that the applicant can pick from, plus have them pick "other" and fill it in as well.
Is it possible to do this with one field, or do I need a second field where the user types it in? Is there a doc, sample, or tutorial I can look up? Thanks.
Here is a super simple Tags sample:
https://drive.google.com/open?id=0BxtQI4fTAVQqcUx4OUJfQ1JYV2c
To cover your exact use case you just need to:
1. Add logic to check if the record already exists
1.1. If the record doesn't exist, then create one
2. Create a relation between the records
If you don't care about duplicates in your database, then you can skip step 1 and always do 1.1 and 2.
I am using a lookup table to successfully apply different UA-ID codes to the same Universal Analytics tag. However, for one particular UA-ID, I need to send data from all pages except data from a particular subdomain, i.e. something like this:
input variable : *.example.com except abc.example.com
Not sure how to implement this logic for the input variable of a lookup table.
Also, if I specify "example.com" as an input variable, does it capture all subdomains?
Edited in Jan 2018 with the latest info.
For lookup tables you need to know a few things:
Lookup table input variables do a hard match, i.e. it's simply an 'equals'; no 'contains', 'starts with', regex, etc.
Lookup tables are sequential: matching starts from the top and stops as soon as a match is found, much like an if/then/else-if (without an 'else' available at the end!).
However, you can now apply a default value if none of the rows in the table match.
There are now Regex Tables available as well, which let you do partial matches on values and return a value based on that. For full and comprehensive details, read the article by Simo.
In your case you have 3 options:
Use a Regex Table lookup.
List each and every hostname (incl. subdomain) you want to match and apply the correct UA number to each; you should end up with as many rows as you have subdomains.
Create a new custom JavaScript variable which inspects the current hostname (incl. subdomain) and returns either 'abc.example.com' or '.example.com' (indicating any other subdomain); then you'll only need a couple of rows in your lookup table.
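A minimal sketch of option 3 (in GTM the Custom JavaScript variable is a single anonymous function; it's shown named here so it runs standalone, and '.example.com' is just an arbitrary lookup key):

    // Collapse every subdomain except abc.example.com into a single lookup key.
    var subdomainKey = function () {
      var host = document.location.hostname; // e.g. "abc.example.com" or "www.example.com"
      return host === 'abc.example.com' ? 'abc.example.com' : '.example.com';
    };

    // The lookup table then only needs two rows: 'abc.example.com' and '.example.com'.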