How to fix a problem with dynamic date templates?

I have a problem with dynamic date templates.
I'm using Elasticsearch 6.2.4.
My steps:
1) Create an index with the following settings:
PUT /test1
{
  "settings": {
    "index": {
      "number_of_shards": 9,
      "number_of_replicas": 0,
      "max_rescore_window": 2000000000,
      "max_result_window": 2000000000
    }
  },
  "mappings": {
    "files": {
      "properties": {
        "Дата добавления в БД": {
          "type": "date"
        }
      },
      "numeric_detection": true,
      "dynamic_templates": [
        {
          "integers": {
            "match_mapping_type": "long",
            "mapping": {
              "type": "long"
            }
          }
        },
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "dates": {
            "match_mapping_type": "date",
            "mapping": {
              "format": "yyyy-MM-dd HH:mm:ss||yyyy/MM/dd HH:mm:ss||yyyyMMdd_HH:mm:ss",
              "type": "date"
            }
          }
        }
      ]
    }
  }
}
2) Try to index a new record (I have only one):
POST /test1/files/_bulk
{"create":{"_index":"test1","_type":"files","_id":"0"}}
{"Дата добавления в БД":"2019/04/12 11:42:21"}
3) I get the following output:
{
  "took": 1,
  "errors": true,
  "items": [
    {
      "create": {
        "_index": "test1",
        "_type": "files",
        "_id": "0",
        "status": 400,
        "error": {
          "type": "mapper_parsing_exception",
          "reason": "failed to parse [Дата добавления в БД]",
          "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "Invalid format: \"2019/04/12 11:42:21\" is malformed at \"/04/12 11:42:21\""
          }
        }
      }
    }
  ]
}
I can't understand where my mistake is.
I tried to find information about this problem on Google, but unfortunately I found no solution. Maybe this question is trivial, but I've already racked my brain over it.
Please help me.

I can't fully understand why, but this option works (presumably because the first mapping declared the field explicitly, so it kept the default date format and the dynamic template never applied, while dynamic_date_formats tells date detection which string patterns to accept):
{
"settings": {
"index":{
"number_of_shards" : 9,
"number_of_replicas" : 0,
"max_rescore_window" : 2000000000,
"max_result_window" : 2000000000
}
},
"mappings": {
"files": {
"dynamic_date_formats": ["yyyy-MM-dd HH:mm:ss","yyyy/MM/dd HH:mm:ss", "yyyyMMdd_HH:mm:ss"],
"numeric_detection": true,
"date_detection": true,
"dynamic_templates": [
{
"integers": {
"match_mapping_type": "long",
"mapping": {
"type": "long"
}
}
},
{
"strings": {
"match_mapping_type": "string",
"mapping": {
"type": "keyword"
}
}
}
]
}
}
}
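For reference, a quick way to verify the fix (a sketch, assuming the index was deleted and re-created with the mapping above) is to repeat the bulk request and then inspect the mapping that date detection generated:
POST /test1/files/_bulk
{"create":{"_index":"test1","_type":"files","_id":"0"}}
{"Дата добавления в БД":"2019/04/12 11:42:21"}

GET /test1/_mapping/files
The string should now be detected through dynamic_date_formats, and the field should be mapped as a date with the "yyyy/MM/dd HH:mm:ss" format.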
Link to documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/dynamic-field-mapping.html
Thanks for your attention :)

Related

Firestore Pagination: how to define 'startAt'-cursor for REST?

I am trying to use a cursor and startAt to paginate REST requests to Firestore. According to the pagination documentation, the cursor should equal the last document of the previous query. Since the REST documentation has no example, I tried inserting an entire document as the cursor in the startAt object, like this:
POST https://firestore.googleapis.com/v1/projects/PROJECT-NAME/databases/(default)/documents/organizations/testManyInstructions:runQuery
{
  "structuredQuery": {
    "from": [
      {
        "collectionId": "instructions"
      }
    ],
    "where": {
      "fieldFilter": {
        "field": {
          "fieldPath": "belongsToDepartementID"
        },
        "op": "EQUAL",
        "value": {
          "stringValue": "toplevel-document-id"
        }
      }
    },
    "orderBy": [
      {
        "field": {
          "fieldPath": "instructionNumber"
        },
        "direction": "ASCENDING"
      }
    ],
    "startAt": {
      "values": [
        {
          "document": {
            "name": "projects/PROJECT-NAME/databases/(default)/documents/organizations/testManyInstructions/instructions/i0",
            "fields": {
              "checkbox": {
                "booleanValue": false
              },
              "retrainTimespanDays": {
                "integerValue": "365000"
              },
              "approvedByName": {
                "stringValue": ""
              },
              "instructionNumber": {
                "stringValue": "instr. 0"
              },
              "instructionCurrentRevision": {
                "stringValue": "A"
              },
              "instructionCurrentRevisionPublishingDate": {
                "timestampValue": "1999-01-01T00:00:00Z"
              },
              "instructionFileURL": {
                "stringValue": ""
              },
              "instructionTitle": {
                "stringValue": "dummy Title0"
              },
              "instructionFileUploadDate": {
                "timestampValue": "1999-01-01T00:00:00Z"
              },
              "belongsToDepartementID": {
                "stringValue": "toplevel-document-id"
              },
              "approvedByEmailAdress": {
                "stringValue": ""
              }
            },
            "createTime": "2022-02-18T13:55:42.807103Z",
            "updateTime": "2022-02-18T13:55:42.807103Z"
          }
        }
      ]
    },
    "limit": 5
  }
}
Without the "startAt" object, the query above works fine and returns 5 documents.
With the "startAt" object, this error is returned:
{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"document\" at 'structured_query.start_at.values[0]': Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "structured_query.start_at.values[0]",
            "description": "Invalid JSON payload received. Unknown name \"document\" at 'structured_query.start_at.values[0]': Cannot find field."
          }
        ]
      }
    ]
  }
}
Please advise how to set the cursor in the startAt object correctly.
I've run a similar query using offset instead of startAt, so I tried modifying it and got it to work. Below is the REST API documentation I used.
startAt requires a Cursor object, which is an array of Values:
https://firebase.google.com/docs/firestore/reference/rest/v1/StructuredQuery
https://firebase.google.com/docs/firestore/reference/rest/v1/Cursor
https://firebase.google.com/docs/firestore/reference/rest/Shared.Types/ArrayValue#Value
I would have preferred an example as well!
"startAt": {
"values": [{
"stringValue": "Cr"
}]
},
"orderBy": [{
"field": {
"fieldPath": "Summary"
}
}],
Good luck!

Elasticsearch & Elasticpress search by math_phrase with & inside query

I have a problem with my queries when I use " or ' (then I expect a match_phrase), but I don't know how I can retrieve posts when the match_phrase contains &.
For example, I use Something & Something as the phrase; when I don't use ' or ", I can see posts with Something & Something, but in that case I'm using multi_match.
Here is what I've tried:
{
  "from": 0,
  "size": 10,
  "sort": {
    "post_date": {
      "order": "desc"
    }
  },
  "query": {
    "function_score": {
      "query": {
        "bool": {
          "must": [
            {
              "match_phrase": {
                "query": "Something & Something"
              }
            }
          ]
        }
      },
      "exp": {
        "post_date_gmt": {
          "scale": "270d",
          "decay": 0.5,
          "offset": "90d"
        }
      },
      "score_mode": "avg",
      "boost_mode": "sum"
    }
  },
  "post_filter": {
    "bool": {
      "must": [
        {
          "terms": {
            "post_type.raw": [
              "post"
            ]
          }
        },
        {
          "terms": {
            "post_status": [
              "publish"
            ]
          }
        }
      ]
    }
  }
}
But this doesn't return any posts; the total hit count is 0. Does anyone have any ideas or suggestions about what I'm doing wrong?
match_phrase is very restrictive, and in most cases it is recommended to use it inside a should clause to increase the score, rather than inside a must clause, because it requires the user to type the value exactly as it is.
Example document
POST test_jakub/_doc
{
  "query": "Something & Something",
  "post_type": {
    "raw": "post"
  },
  "post_status": "publish",
  "post_date_gmt": "2021-01-01T12:10:30Z",
  "post_date": "2021-01-01T12:10:30Z"
}
With this document, searching for "anotherthing Something & Something" will return no results; that's why it is a bad idea to use match_phrase here.
You can take two approaches:
If you need this kind of tight matching, take a look at the slop parameter, which adds some flexibility to the match_phrase query by allowing words in the phrase to be omitted or transposed.
Switch to a regular match query (recommended). In most cases this will work fine, and if you want to give extra score to phrase matches, you can add the match_phrase as a should clause.
POST test_jakub/_search
{
  "from": 0,
  "size": 10,
  "sort": {
    "post_date": {
      "order": "desc"
    }
  },
  "query": {
    "function_score": {
      "query": {
        "bool": {
          "should": [
            {
              "match_phrase": {
                "query": {
                  "query": "anotherthing something & something",
                  "slop": 2
                }
              }
            }
          ],
          "must": [
            {
              "match": {
                "query": "anotherthing something & something"
              }
            }
          ]
        }
      },
      "exp": {
        "post_date_gmt": {
          "scale": "270d",
          "decay": 0.5,
          "offset": "90d"
        }
      },
      "score_mode": "avg",
      "boost_mode": "sum"
    }
  },
  "post_filter": {
    "bool": {
      "must": [
        {
          "terms": {
            "post_type.raw": [
              "post"
            ]
          }
        },
        {
          "terms": {
            "post_status": [
              "publish"
            ]
          }
        }
      ]
    }
  }
}
One last piece of advice: avoid using "query" as a field name, because it leads to confusion and will break Kibana autocomplete in Dev Tools.

Azure Time Series Insights query: how to denote a variable such as 'cpu.thermal.average' in the query?

My series has the property cpu.thermal.average, and I can see its values in the TSI explorer for the given time range.
When I execute the following request, I only get "null" for "temperatures".
I am not sure how to write the equivalent of $event.cpu.thermal.average.
Any idea?
{
  "aggregateSeries": {
    "timeSeriesId": [
      "MySeries"
    ],
    "searchSpan": {
      "from": "2021-03-10T07:00:00Z",
      "to": "2021-03-10T20:00:50Z"
    },
    "interval": "PT3600M",
    "filter": null,
    "inlineVariables": {
      "latitudes": {
        "kind": "numeric",
        "value": {
          "tsx": "$event.latitude"
        },
        "filter": null,
        "aggregation": {
          "tsx": "avg($value)"
        }
      },
      "temperatures": {
        "kind": "numeric",
        "value": {
          "tsx": "$event['cpu.thermal.average']"
        },
        "filter": null,
        "aggregation": {
          "tsx": "avg($value)"
        }
      }
    },
    "projectedVariables": [
      "latitudes",
      "temperatures"
    ]
  }
}
From here: https://learn.microsoft.com/en-us/rest/api/time-series-insights/reference-time-series-expression-syntax#value-expressions
When accessing nested properties, the Type is required.
It should work if you use $event.cpu.thermal.average.Double
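So only the value expression needs to change; a minimal sketch of the corrected "temperatures" inline variable from the request above:
"temperatures": {
  "kind": "numeric",
  "value": {
    "tsx": "$event.cpu.thermal.average.Double"
  },
  "filter": null,
  "aggregation": {
    "tsx": "avg($value)"
  }
}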

ElasticSearch indexing issue ,failed to parse timestamp

I am new to ELK.
I have created this index in Elasticsearch:
{
  "logstash": {
    "aliases": {},
    "mappings": {
      "log": {
        "dynamic_templates": [
          {
            "message_field": {
              "path_match": "message",
              "match_mapping_type": "string",
              "mapping": {
                "norms": false,
                "type": "text"
              }
            }
          },
          {
            "string_fields": {
              "match": "*",
              "match_mapping_type": "string",
              "mapping": {
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                },
                "norms": false,
                "type": "text"
              }
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date"
          },
          "@version": {
            "type": "keyword",
            "include_in_all": false
          },
          "activity": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "beat": {
            "properties": {
              "hostname": {
                "type": "text",
                "norms": false,
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              },
              "name": {
                "type": "text",
                "norms": false,
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              },
              "version": {
                "type": "text",
                "norms": false,
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              }
            }
          },
          "filename": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "host": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "input_type": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "message": {
            "type": "text",
            "norms": false
          },
          "offset": {
            "type": "long"
          },
          "source": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "tags": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "timestamp": {
            "type": "date",
            "include_in_all": false,
            "format": "YYYY-MM-DD HH:mm:ss.SSS"
          },
          "type": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          },
          "user": {
            "type": "text",
            "norms": false,
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1488805244467",
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "uuid": "5ijhh193Tr6y_hxaQrW9kg",
        "version": {
          "created": "5020199"
        },
        "provided_name": "logstash"
      }
    }
  }
}
Below is my Logstash configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] ALL AUDIT: User \[%{GREEDYDATA:user}\] is %{GREEDYDATA:activity} \[%{GREEDYDATA:filename}\] for transfer." }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash"
  }
}
Sample Data
[2017-03-05 12:37:21.465] ALL AUDIT: User [user1] is opening file [filename1] for transfer.
But when I load the file through Filebeat > Logstash > Elasticsearch,
I get the error below in Elasticsearch:
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [timestamp]
Caused by: java.lang.IllegalArgumentException: Invalid format: "2017-03-05T12:36:33.606" is malformed at "12:36:33.606"
at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[joda-time-2.9.5.jar:2.9.5]
Please help: what timestamp format should I configure?
In your timestamp mapping you declare the format as "format": "YYYY-MM-DD HH:mm:ss.SSS", but the value you are sending through Beats does not match it; check: 2017-03-05T12:36:33.606.
That's why Elastic is complaining about the format. Your format should be "yyyy-MM-dd'T'HH:mm:ss.SSS" (note the quoted capital T between date and time; lowercase yyyy and dd are also safer, since in Joda-Time Y means week-year and D means day-of-year).
See the documentation for more details: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html
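A sketch of the corrected field mapping, with the rest of the question's index definition unchanged:
"timestamp": {
  "type": "date",
  "include_in_all": false,
  "format": "yyyy-MM-dd'T'HH:mm:ss.SSS"
}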

Elastic Search Date Parsing Error

I'm pretty new to configuring Elasticsearch, and I am having problems trying to parse a log date, which seems like it should be a trivial thing to do.
Any insight for a newbie?
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "failed to parse [Message.LogTime]"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse [Message.LogTime]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Invalid format: \"2015-11-12 01:37:35.490\" is malformed at \" 01:37:35.490\""
}
}
My JSON payload
{
  "LoggerType": "ErrorAndInfo",
  "Message": {
    "LogId": 0,
    "LogStatus": 0,
    "LogTime": "2015-11-12 01:37:35.490",
    "VersionInfo": "",
    "AdditionalInformation": null
  }
}
Elastic Search Template Mapping
"mappings": {
"log_message" : {
"_all" : { "enabled": false },
"properties": {
"LoggerType" : { "type" : "string" },
"Message" : {
"properties": {
"LogId": { "type" : "integer" },
"LogStatus": { "type" : "integer" },
"LogTime": {
"type" : "date",
"format" : "yyyy-MM-dd HH:mm:ss.SSS"
},
"VersionInfo": {
"type" : "string",
"index" : "not_analyzed"
},
}
}
}
}
}
I figured it out. You will have to re-create your index for the changes to be applied.
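Mapping changes are not applied retroactively to documents already indexed, so the index has to be dropped (or reindexed) and created again with the corrected template. A minimal sketch, assuming a hypothetical index name my_index:
# "my_index" is a hypothetical name; adjust to your real index
DELETE /my_index

PUT /my_index
{
  "mappings": {
    "log_message": {
      "properties": {
        "Message": {
          "properties": {
            "LogTime": {
              "type": "date",
              "format": "yyyy-MM-dd HH:mm:ss.SSS"
            }
          }
        }
      }
    }
  }
}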
