Elastic Search Date Parsing Error - datetime

I'm pretty new to configuring Elasticsearch and I'm having problems trying to parse a log date, which seems like it should be a trivial thing to do. Any insight for a newbie?
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "failed to parse [Message.LogTime]"
}
],
"type": "mapper_parsing_exception",
"reason": "failed to parse [Message.LogTime]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Invalid format: \"2015-11-12 01:37:35.490\" is malformed at \" 01:37:35.490\""
}
}
My JSON payload:
{
  "LoggerType": "ErrorAndInfo",
  "Message": {
    "LogId": 0,
    "LogStatus": 0,
    "LogTime": "2015-11-12 01:37:35.490",
    "VersionInfo": "",
    "AdditionalInformation": null
  }
}
Elastic Search Template Mapping:
"mappings": {
  "log_message": {
    "_all": { "enabled": false },
    "properties": {
      "LoggerType": { "type": "string" },
      "Message": {
        "properties": {
          "LogId": { "type": "integer" },
          "LogStatus": { "type": "integer" },
          "LogTime": {
            "type": "date",
            "format": "yyyy-MM-dd HH:mm:ss.SSS"
          },
          "VersionInfo": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    }
  }
}

I figured it out. The format string itself was fine; mapping changes from a template are only applied when an index is created, so you will have to re-create your index for the changes to be applied.
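For example (a sketch; logs is a placeholder index name, and it assumes the corrected template is already in place so that it is applied when the index is re-created):

DELETE /logs
PUT /logs
GET /logs/_mapping

Note that deleting the index discards its documents; to keep them, create a new index with the corrected mapping and copy the data over (for example with the Reindex API, on versions that have it).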

Related

Firestore Pagination: how to define 'startAt'-cursor for REST?

I am trying to use a cursor and startAt to paginate REST requests to Firestore. According to the pagination documentation, the cursor should be equal to the last document of the previous query. As the REST documentation has no example, I tried inserting an entire document as the cursor in the startAt object, like this:
POST https://firestore.googleapis.com/v1/projects/PROJECT-NAME/databases/(default)/documents/organizations/testManyInstructions:runQuery
{
  "structuredQuery": {
    "from": [
      {
        "collectionId": "instructions"
      }
    ],
    "where": {
      "fieldFilter": {
        "field": {
          "fieldPath": "belongsToDepartementID"
        },
        "op": "EQUAL",
        "value": {
          "stringValue": "toplevel-document-id"
        }
      }
    },
    "orderBy": [
      {
        "field": {
          "fieldPath": "instructionNumber"
        },
        "direction": "ASCENDING"
      }
    ],
    "startAt": {
      "values": [
        {
          "document": {
            "name": "projects/PROJECT-NAME/databases/(default)/documents/organizations/testManyInstructions/instructions/i0",
            "fields": {
              "checkbox": {
                "booleanValue": false
              },
              "retrainTimespanDays": {
                "integerValue": "365000"
              },
              "approvedByName": {
                "stringValue": ""
              },
              "instructionNumber": {
                "stringValue": "instr. 0"
              },
              "instructionCurrentRevision": {
                "stringValue": "A"
              },
              "instructionCurrentRevisionPublishingDate": {
                "timestampValue": "1999-01-01T00:00:00Z"
              },
              "instructionFileURL": {
                "stringValue": ""
              },
              "instructionTitle": {
                "stringValue": "dummy Title0"
              },
              "instructionFileUploadDate": {
                "timestampValue": "1999-01-01T00:00:00Z"
              },
              "belongsToDepartementID": {
                "stringValue": "toplevel-document-id"
              },
              "approvedByEmailAdress": {
                "stringValue": ""
              }
            },
            "createTime": "2022-02-18T13:55:42.807103Z",
            "updateTime": "2022-02-18T13:55:42.807103Z"
          }
        }
      ]
    },
    "limit": 5
  }
}
without the "startAt"-Object, the following code works fine and returns 5 documents.
with the "startAt"-Object, this error is returned:
{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"document\" at 'structured_query.start_at.values[0]': Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "structured_query.start_at.values[0]",
            "description": "Invalid JSON payload received. Unknown name \"document\" at 'structured_query.start_at.values[0]': Cannot find field."
          }
        ]
      }
    ]
  }
}
Please advise how to set the cursor in the startAt object correctly.
I've run a similar query using offset instead of startAt, so I tried modifying it and got it to work. This is the REST API documentation I used:
startAt requires a Cursor object, which is an array of Value objects.
https://firebase.google.com/docs/firestore/reference/rest/v1/StructuredQuery
https://firebase.google.com/docs/firestore/reference/rest/v1/Cursor
https://firebase.google.com/docs/firestore/reference/rest/Shared.Types/ArrayValue#Value
I would have preferred an example as well!
"startAt": {
"values": [{
"stringValue": "Cr"
}]
},
"orderBy": [{
"field": {
"fieldPath": "Summary"
}
}],
Good luck!
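Applied to the question's query, a minimal sketch might look like this (the cursor values must line up, in order, with the orderBy fields, so the cursor here is the instructionNumber of the last document of the previous page; "before": false positions the cursor just after that value, giving start-after semantics per the Cursor reference above):

POST https://firestore.googleapis.com/v1/projects/PROJECT-NAME/databases/(default)/documents/organizations/testManyInstructions:runQuery
{
  "structuredQuery": {
    "from": [
      { "collectionId": "instructions" }
    ],
    "where": {
      "fieldFilter": {
        "field": { "fieldPath": "belongsToDepartementID" },
        "op": "EQUAL",
        "value": { "stringValue": "toplevel-document-id" }
      }
    },
    "orderBy": [
      {
        "field": { "fieldPath": "instructionNumber" },
        "direction": "ASCENDING"
      }
    ],
    "startAt": {
      "values": [
        { "stringValue": "instr. 0" }
      ],
      "before": false
    },
    "limit": 5
  }
}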

Azure Time Series Insights query: how to denote a variable such as 'cpu.thermal.average' in the query?

My series has a value called cpu.thermal.average, and I can see its values in the TSI explorer for the given time range.
When I execute the following request, I only get null for "temperatures".
I am not sure how to write the equivalent of $event.cpu.thermal.average.
Any idea?
{
  "aggregateSeries": {
    "timeSeriesId": [
      "MySeries"
    ],
    "searchSpan": {
      "from": "2021-03-10T07:00:00Z",
      "to": "2021-03-10T20:00:50Z"
    },
    "interval": "PT3600M",
    "filter": null,
    "inlineVariables": {
      "latitudes": {
        "kind": "numeric",
        "value": {
          "tsx": "$event.latitude"
        },
        "filter": null,
        "aggregation": {
          "tsx": "avg($value)"
        }
      },
      "temperatures": {
        "kind": "numeric",
        "value": {
          "tsx": "$event['cpu.thermal.average']"
        },
        "filter": null,
        "aggregation": {
          "tsx": "avg($value)"
        }
      }
    },
    "projectedVariables": [
      "latitudes",
      "temperatures"
    ]
  }
}
From here: https://learn.microsoft.com/en-us/rest/api/time-series-insights/reference-time-series-expression-syntax#value-expressions
When accessing nested properties, the type is required.
It should work if you use $event.cpu.thermal.average.Double.
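A sketch of the corrected inline variable from the request above; only the tsx value expression changes:

"temperatures": {
  "kind": "numeric",
  "value": {
    "tsx": "$event.cpu.thermal.average.Double"
  },
  "filter": null,
  "aggregation": {
    "tsx": "avg($value)"
  }
}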

How to fix a problem with dynamic date templates?

I have a problem with dynamic date templates.
I'm using Elasticsearch 6.2.4.
My steps:
1) Create an index with the following settings:
PUT /test1
{
  "settings": {
    "index": {
      "number_of_shards": 9,
      "number_of_replicas": 0,
      "max_rescore_window": 2000000000,
      "max_result_window": 2000000000
    }
  },
  "mappings": {
    "files": {
      "properties": {
        "Дата добавления в БД": {
          "type": "date"
        }
      },
      "numeric_detection": true,
      "dynamic_templates": [
        {
          "integers": {
            "match_mapping_type": "long",
            "mapping": {
              "type": "long"
            }
          }
        },
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "dates": {
            "match_mapping_type": "date",
            "mapping": {
              "format": "yyyy-MM-dd HH:mm:ss||yyyy/MM/dd HH:mm:ss||yyyyMMdd_HH:mm:ss",
              "type": "date"
            }
          }
        }
      ]
    }
  }
}
2) Try to put new records (I have only one):
POST /test1/files/_bulk
{"create":{"_index":"test1","_type":"files","_id":"0"}}
{"Дата добавления в БД":"2019/04/12 11:42:21"}
3) I get the following output:
{
  "took": 1,
  "errors": true,
  "items": [
    {
      "create": {
        "_index": "test1",
        "_type": "files",
        "_id": "0",
        "status": 400,
        "error": {
          "type": "mapper_parsing_exception",
          "reason": "failed to parse [Дата добавления в БД]",
          "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "Invalid format: \"2019/04/12 11:42:21\" is malformed at \"/04/12 11:42:21\""
          }
        }
      }
    }
  ]
}
I can't understand where my mistake is. I tried to find information about this problem on Google but, unfortunately, found no solution. Maybe the question is trivial, but I've already broken my brain over it. Please help.
I can't fully explain why, but this option works. (Most likely the original request fails because "Дата добавления в БД" is explicitly mapped as a date with the default format, and dynamic templates only apply to fields that are not already in the mapping; dropping the explicit mapping and declaring the formats in dynamic_date_formats lets date detection map the field with the right formats.)
{
  "settings": {
    "index": {
      "number_of_shards": 9,
      "number_of_replicas": 0,
      "max_rescore_window": 2000000000,
      "max_result_window": 2000000000
    }
  },
  "mappings": {
    "files": {
      "dynamic_date_formats": ["yyyy-MM-dd HH:mm:ss", "yyyy/MM/dd HH:mm:ss", "yyyyMMdd_HH:mm:ss"],
      "numeric_detection": true,
      "date_detection": true,
      "dynamic_templates": [
        {
          "integers": {
            "match_mapping_type": "long",
            "mapping": {
              "type": "long"
            }
          }
        },
        {
          "strings": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        }
      ]
    }
  }
}
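An alternative sketch that should also work (untested here): keep the explicit mapping, but declare the accepted formats directly on the field so the value parses without relying on date detection:

"properties": {
  "Дата добавления в БД": {
    "type": "date",
    "format": "yyyy-MM-dd HH:mm:ss||yyyy/MM/dd HH:mm:ss||yyyyMMdd_HH:mm:ss"
  }
}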
Link to documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/dynamic-field-mapping.html
Thanks for your attention :)

Error using elastic on Linux server but no error on Windows

When I execute
elastic::Search(index = index, body = body, size = 1000, scroll = "3m")
on a Linux server, I receive the following error:
invalid char in json text. <!DOCTYPE HTML PUBLIC "-//W3C//
On Windows everything is fine. However, if I execute elastic::Search with a different body, it works. So here is my body:
'{
  "_source": ["DOC_ID", "DELIVERY_ID", "CONTRIB_TS", "LANG", "SYS_NOT", "SURVEIL"],
  "query": {
    "bool": {
      "must": [
        { "match_phrase": { "CONTENT": "XXX" } }
      ],
      "filter": [
        { "term": { "DELIVERY_ID": "100" } },
        { "term": { "SYS_NOT": "0" } }
      ]
    }
  },
  "highlight": {
    "pre_tags": [""],
    "post_tags": [""],
    "fields": {
      "CONTENT": { "fragment_size": 200 }
    }
  }
}'
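The <!DOCTYPE HTML fragment in the error means the server replied with an HTML page (often a proxy or gateway error page) rather than JSON, which is why parsing fails on that machine only. A minimal sketch for inspecting the raw reply with httr (host, port, and index name are placeholders):

library(httr)

# Placeholders: adjust host, port, and index to match your cluster.
res <- POST(
  "http://localhost:9200/myindex/_search?scroll=3m&size=1000",
  body = body,          # the JSON string shown above
  content_type_json()
)
# Print the start of the raw response; HTML here confirms an intermediary's error page.
cat(substr(content(res, as = "text", encoding = "UTF-8"), 1, 300))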

How can I use jq to transform an array into a different array, joined with other data?

I need to transform this:
{"input":{"text":"HI"},"output":{"text":["OK1","TWO"]}}
Into this:
{
  "localDB": [
    {
      "tableName": "Default",
      "mode": "append",
      "data": [
        {
          "time": "1511281401.991815",
          "message": "HI",
          "from": "me"
        },
        {
          "time": "1511281401.991837",
          "message": "OK1",
          "from": "bot"
        },
        {
          "time": "1511281401.991847",
          "message": "TWO",
          "from": "bot"
        }
      ]
    }
  ]
}
Is it possible at all?
The key issue here is that the number of "records" in localDB should vary depending on the number of entries in the .output.text node. There could be just one text, or three or more.
I tried with this, but it is not quite working:
{
  "localDB": [{
    "tableName": "Default",
    "mode": "append",
    "data": [
      {"time": now|tostring, "message": .input.text, "from": "me"},
      {"time": now|tostring, "message": .output.text, "from": "bot"}
    ]
  }]
}
I think you are very close. You just need to use .output.text[] and take advantage of how Object Construction behaves when a member expression returns multiple results. Try this:
{
  "localDB": [{
    "tableName": "Default",
    "mode": "append",
    "data": [
      {"time": now|tostring, "message": .input.text, "from": "me"},
      {"time": now|tostring, "message": .output.text[], "from": "bot"}
    ]
  }]
}
Sample Output
{
  "localDB": [
    {
      "tableName": "Default",
      "mode": "append",
      "data": [
        {
          "time": "1511283566.11608",
          "message": "HI",
          "from": "me"
        },
        {
          "time": "1511283566.116094",
          "message": "OK1",
          "from": "bot"
        },
        {
          "time": "1511283566.116094",
          "message": "TWO",
          "from": "bot"
        }
      ]
    }
  ]
}
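The multiplication inside data deserves a word: when an expression in a jq object construction yields several results, the whole object is emitted once per result. A minimal sketch:

# Given the input ["OK1","TWO"], this filter emits two objects:
#   {"message":"OK1","from":"bot"}
#   {"message":"TWO","from":"bot"}
{"message": .[], "from": "bot"}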
Here is a filter which uses a settime function to assign different times to the rows:
def settime: reduce range(length) as $i (.; .[$i].time = (now + $i|tostring));
{
  "localDB": [{
    "tableName": "Default",
    "mode": "append",
    "data": [
      {"message": .input.text, "from": "me"},
      {"message": .output.text[], "from": "bot"}
    ] | settime
  }]
}
Sample Output
{
  "localDB": [
    {
      "tableName": "Default",
      "mode": "append",
      "data": [
        {
          "message": "HI",
          "from": "me",
          "time": "1511285948.684203"
        },
        {
          "message": "OK1",
          "from": "bot",
          "time": "1511285949.684243"
        },
        {
          "message": "TWO",
          "from": "bot",
          "time": "1511285950.684261"
        }
      ]
    }
  ]
}
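Either filter can be saved to a file and run from the command line (a sketch; transform.jq and input.json are placeholder file names):

jq -f transform.jq input.json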
