Store Date Format in Elasticsearch - datetime

I ran into a problem when trying to add a datetime string to Elasticsearch.
The document is below:
{"LastUpdate" : "2013/07/24 00:00:00"}
Indexing this document raised an error: NumberFormatException [For input string: \"20130724 00:00:00\"].
I know that I can use a date format in Elasticsearch, but I don't know how to use it even though I have read the documentation on the website. Both
{"LastUpdate": {
"properties": {
"type": "date",
"format": "yyyy-MM-dd"}
}
}
and
{"LastUpdate": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
are wrong.
How can I convert the datetime string into a date type in Elasticsearch?
How can I store the datetime string directly in Elasticsearch?

You are nearly there. Set your mapping like this:
{"LastUpdate": {
"type" : "date",
"format" : "yyyy/MM/dd HH:mm:ss"}
}
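For reference, a complete index-creation request with that mapping might look like the following sketch; it assumes Elasticsearch 7+ mapping syntax and a placeholder index name my_index (older versions nest the fields under a mapping type):

PUT my_index
{
  "mappings": {
    "properties": {
      "LastUpdate": {
        "type": "date",
        "format": "yyyy/MM/dd HH:mm:ss"
      }
    }
  }
}

With that in place, the original document {"LastUpdate" : "2013/07/24 00:00:00"} should index without the NumberFormatException.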
Read the docs on the date mapping and its options and the date format parameter (one of the options to the date mapping).
Good luck!

Related

Mappable parameter for date without time

I am implementing an API that has the following requirement:
The due date should be sent in the format without time.
How can this be done? The Integromat docs mention a boolean "time" parameter of the date object, but it doesn't seem to affect anything.
The question is: how do I make this: "spent_on": "2020-02-19T23:00:00.000Z" look like this: "spent_on": "2020-02-20"?
{
  "name": "spent_on",
  "label": "Spent on the Date",
  "type": "date",
  "time": false,
  "required": true
}
I am sure you figured it out by now, but if not: you have formatDate available within Integromat in the text field. Then you would use formatDate(2020-02-19T23:00:00.000Z; YYYY-MM-DD).

How to map a Firestore date object to a date in Elasticsearch

I am using a Cloud Function to send a Firebase Firestore document to Elasticsearch for indexing. I am trying to find a way to map a Firestore timestamp field to an Elasticsearch date field in the index.
The Elasticsearch date type mapping supports the epoch_millis and epoch_second formats, but the Firestore date type is an object as follows:
"timestamp": {
"_seconds": 1551833330,
"_nanoseconds": 300000000
},
I could use the _seconds field but would lose the fractional part of the second.
Is there a way to map the timestamp object to a date field in the index that calculates epoch_millis from the _seconds and _nanoseconds fields? I recognize that precision will be lost (nanos to millis).
If you don't mind losing the fractional part of the second, you could set a mapping on your index like this, which is what I ended up doing:
"mappings": {
"date_detection": false,
"dynamic_templates": [
{
"dates": {
"match": ".*_seconds",
"match_pattern": "regex",
"mapping": {
"type": "date",
"format": "epoch_second"
}
}
}
]
}
It will convert any timestamps (even nested in the document) to dates with second precision.
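If you do want to keep millisecond precision, another option (not part of the answer above) is to compute epoch_millis yourself in the Cloud Function before sending the document to Elasticsearch, and map the field with "format": "epoch_millis". A minimal TypeScript sketch, with illustrative names:

// Shape of the Firestore timestamp object shown in the question.
interface FirestoreTimestamp {
  _seconds: number;
  _nanoseconds: number;
}

// Convert to epoch milliseconds; precision beyond millis is dropped
// (there are 1e6 nanoseconds in a millisecond).
function toEpochMillis(ts: FirestoreTimestamp): number {
  return ts._seconds * 1000 + Math.round(ts._nanoseconds / 1e6);
}

// Example: { _seconds: 1551833330, _nanoseconds: 300000000 } -> 1551833330300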

AWS AppSync Query to shape response data (Similar to Group By in SQL)

I have one DynamoDB table with all the data I need for the client, however, I want to shape the data the client receives to reduce client-side manipulation.
My Schema:
type StateCounty {
  id: ID!
  StateName: String
  CountyName: String
  FIPSST: Int
  FIPSCNTY: Int
  Penetration: String
  Date: String
}
and to return a custom query I have the type:
type Query {
  getStateCountybyState(StateName: String): StateCountyConnection
}
This works - and with a simple query
query getStateCountybyState {
  getStateCountybyState(StateName: "Delaware") {
    items {
      StateName
      CountyName
      Date
    }
  }
}
the results are returned as expected:
{
  "StateName": "Delaware",
  "CountyName": "Kent",
  "Date": "02-01-2017"
},
{
  "StateName": "Delaware",
  "CountyName": "Sussex",
  "Date": "02-01-2016"
},
{
  "StateName": "Delaware",
  "CountyName": "New Castle",
  "Date": "02-01-2018"
}
etc.
I would like to return the data in the following format:
{
  "StateName": "Delaware" {
    {
      "CountyName": "Kent",
      "Date": "02-01-2017"
    },
    {
      "CountyName": "Sussex",
      "Date": "02-01-2016"
    },
    {
      "CountyName": "New Castle",
      "Date": "02-01-2018"
    }
  }
}
I have tried adding GroupCounty: [StateCountyGroup] to the schema:
type StateCounty {
  id: ID!
  StateName: String
  CountyName: String
  FIPSST: Int
  FIPSCNTY: Int
  Penetration: String
  Date: String
  GroupCounty: [StateCountyGroup]
}
and then a reference to that in the query
query getStateCountybyState {
  getStateCountybyState(StateName: "Delaware") {
    items {
      StateName
      CountyName
      Date
      GroupCounty: [StateCountyGroup]
    }
  }
}
I think my issue is within the resolver - currently, it is configured to use the StateName as a key, but I am not sure how to pass the StateName from the primary query to the subquery.
Resolver:
{
  "version" : "2017-02-28",
  "operation" : "Query",
  "query" : {
    "expression" : "StateName = :StateName",
    "expressionValues" : {
      ":StateName" : { "S" : "${context.arguments.StateName}" }
    }
  },
  "index" : "StateName-index-copy",
  "select" : "ALL_ATTRIBUTES"
}
Any guidance appreciated - I have gone through the documentation several times, but cannot find an example.
UPDATE
I tried the suggestion below from Richard, and it is definitely on the right track. However, despite multiple variations on the theme, I either get null back or the following error (I eliminated some of the county objects returned in the error for brevity):
"message": "Unable to convert set($myresponse = {\n \"Delaware\":
[{SSA=8000, Eligibles=32295, FIPS=10001, StateName=Delaware, SSACNTY=0,
Date=02-01-2016, CountyName=Kent, Enrolled=3066, Penetration=0.0949,
FIPSCNTY=1, FIPSST=10, SSAST=8, id=6865},
{SSA=8010, Eligibles=91332, FIPS=10003, StateName=Delaware, SSACNTY=10, Date=02-01-2016, CountyName=New Castle, Enrolled=10322, Penetration=0.113, FIPSCNTY=3, FIPSST=10, SSAST=8, id=6866},
{SSA=0, Eligibles=10, FIPS=10, StateName=Delaware, SSACNTY=0, Date=02-01-2018, CountyName=Pending County Designation, Enrolled=0, Penetration=0, FIPSCNTY=0, FIPSST=10, SSAST=0, id=325},
{SSA=8000, Eligibles=33371, FIPS=10001, StateName=Delaware, SSACNTY=0, Date=02-01-2017, CountyName=Kent, Enrolled=3603, Penetration=0.108, FIPSCNTY=1, FIPSST=10, SSAST=8, id=3598},
{SSA=8020, Eligibles=58897, FIPS=10005, StateName=Delaware, SSACNTY=20, Date=02-01-2016, CountyName=Sussex, Enrolled=3760, Penetration=0.0638, FIPSCNTY=5, FIPSST=10, SSAST=8, id=6867}) \nnull\n\n to class java.lang.Object."
}
]
}
From reading the above, it sounds like your original query is returning the correct results, just not in the response format that you would prefer: you would like "StateName" to be a top-level JSON key, with the value being a JSON object for the state which you passed in as an argument. Is that accurate? If so, why not use the same query that already works, but with a different response template? Something like:
#set($myresponse = {
  "$ctx.args.StateName": $ctx.result.items
})
$util.toJson($myresponse)
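With the Delaware data from the question, that template would produce a response shaped roughly like this (illustrative, with most fields trimmed):

{
  "Delaware": [
    { "CountyName": "Kent", "Date": "02-01-2017" },
    { "CountyName": "Sussex", "Date": "02-01-2016" },
    { "CountyName": "New Castle", "Date": "02-01-2018" }
  ]
}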
Note that $myresponse isn't exactly the same as you had above, as your example with "StateName" : "Delaware" { ... } wasn't completely valid JSON, so I didn't want to make an assumption about what a good structure would be; but the point remains that if you're already getting the proper results from your query, I would just change the structure of your GraphQL results.
Now if I misread the above and you're NOT getting the proper results from the query, the other way that I could read your statement of "primary query to the subquery" is that you're trying to apply an additional "filter" to your query results. If that is the case then you need something like this:
{
  "version" : "2017-02-28",
  "operation" : "Query",
  "query" : {
    "expression" : "StateName = :StateName",
    "expressionValues" : {
      ":StateName" : { "S" : "${context.arguments.StateName}" }
    }
  },
  "index" : "StateName-index-copy",
  "select" : "ALL_ATTRIBUTES",
  "filter" : {
    "expression" : "#population >= :population",
    "expressionNames" : {
      "#population" : "population"
    },
    "expressionValues" : {
      ":population" : $util.dynamodb.toDynamoDBJson($ctx.args.population)
    }
  }
}
I used an example here where maybe your query also needed to filter by the population size in each county. This may not be representative of what you're looking for but hopefully it helps.
EDITED WITH MORE INFORMATION 4/16/18
I've written up more information on this in a step-by-step manner, to go through the concepts in pieces.
The key here is not just the response template, but also the fields that you're requesting to be returned (as this is the nature of GraphQL). Let's walk through this by way of example. Since your response template converts an array to a single item, you're now returning an individual item with GraphQL, so you'll need to change the expected GraphQL query response type. Suppose you have a GraphQL type in your schema like this:
type State {
  id: ID!
  population: String!
  governor: String!
}

type Query {
  allStates: [State]
}
If you just convert the response in the template as above you'll see an error like "type mismatch error, expected type LIST" if you run something like this:
query {
  allStates {
    id
    population
  }
}
That's because your response is no longer returning the individual items as a list. Instead you'll need to change the GraphQL response type from [State] to State, to match what your template conversion is doing, like so:
type State {
  StateName: String
}

type Query {
  allStates: State
}
Now if your resolver request template is doing something that returns a list of items (like a DynamoDB scan or Query) you can convert the list to a single item in the response template like so:
#set($convert = {"StateName" : $ctx.result.items })
$util.toJson($convert)
Then run the following GraphQL query:
query {
  allStates {
    StateName
  }
}
And you'll get a single object containing an array of your results back:
{
  "data": {
    "allStates": {
      "StateName": "[{id=1, population=10000, governor=John Smith}]"
    }
  }
}
However, while this might be pointing out the errors you are having, it only returns a StateName, and from your original question I think you are looking to do a bit more: combining records in the response for some optimization, along with some potential filtering. One way to do this would be to create an array (or a map {}) and populate it based on some conditional. For example, modify your query to take a StateName as an argument:
type Query {
  allStates(StateName: String!): State
}
Then you can filter on this in the resolver response template, by using a #foreach and an #if() conditional, then calling .add() only if items in the response are for the state which you requested:
#set($convert = {"StateName" : [] })
#foreach($item in $ctx.result.items)
  #if($item["StateName"] == "$ctx.args.StateName")
    $util.qr($convert.get("StateName").add("$item"))
  #end
#end
$util.toJson($convert)
So now you could run something like this:
query {
  allStates(StateName: "Texas") {
    StateName
  }
}
And this will give you back just the results for that specific state which you passed as an argument. But you'll notice the selection set of the query is StateName. You could introduce a bit more flexibility by having the possible states listed in your GraphQL type:
type State {
  StateName: String
  Seattle: String
  Texas: String
}
Now you alter your resolver response template to use the argument for building up the return array since it can specify this in the selection set:
#set($convert = {"$ctx.args.StateName" : [] })
#foreach($item in $ctx.result.items)
  #if($item["StateName"] == "$ctx.args.StateName")
    $util.qr($convert.get("$ctx.args.StateName").add("$item"))
  #end
#end
$util.toJson($convert)
So I can run this query:
query {
  allStates(StateName: "Seattle") {
    Seattle
  }
}
And I get back my result. Note though that passing Seattle as the argument but requesting back Texas:
query {
  allStates(StateName: "Seattle") {
    Texas
  }
}
This will not work as the response object you created in your map was Seattle: [...] but you had Texas as the selection set.
The final thing that you might want to do is have multiple states returned. You could do that by building up one giant map keyed by the state name, or by using the arguments or the selection set through adding state names to the return type as demonstrated above. That's up to you, so I'm not sure exactly what you'll want, but hopefully this demonstrates how you can manipulate the responses to meet your needs; a rough sketch of the giant-map approach follows.
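As an illustration only (assuming each returned item carries a StateName attribute, as in the question), a response template along these lines would group every item under its state:

#set($grouped = {})
#foreach($item in $ctx.result.items)
  ## Create the bucket for this state the first time we see it
  #if(!$grouped.containsKey($item.StateName))
    $util.qr($grouped.put($item.StateName, []))
  #end
  ## Append the item to its state's bucket
  $util.qr($grouped.get($item.StateName).add($item))
#end
$util.toJson($grouped)

The corresponding GraphQL return type would need to accommodate whichever state names can appear, as discussed above.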

Converting a string date to a Date field using scripted fields in Kibana

Hi, I am working on the ELK stack. I have a date in the form of a string like below:
"23/Nov/2017:02:35:02 +0000"
Now I want to use scripted fields in Kibana to convert the date string to a date field.
Can anyone help me with what to put in the script, or how to go about it?
So for your case, let's say the date is in the field { logdate: "23/Nov/2017:02:35:02 +0000" }.
In order to convert the logdate (string) to a timestamp value, we can use a Logstash date filter.
In our case, the date filter should look something like the below. This filter will parse the date time and save it to the @timestamp field, which is the default; if you want to save it to a particular field, use the target setting.
filter {
  date {
    match => [ "logdate", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
Let me know if the filter works for you.
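If you would rather do the conversion in Elasticsearch itself (for example, when the data does not pass through Logstash), an ingest pipeline with a date processor is an alternative technique; a sketch, with an illustrative pipeline name:

PUT _ingest/pipeline/parse-logdate
{
  "processors": [
    {
      "date": {
        "field": "logdate",
        "formats": ["dd/MMM/yyyy:HH:mm:ss Z"],
        "target_field": "@timestamp"
      }
    }
  ]
}

Documents indexed through this pipeline get a parsed @timestamp date field alongside the original logdate string.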

NEST is adding TimeZone while indexing docs in Elasticsearch

I have a DateTime field in my C# class as below:
public DateTime PassedCreatedDate { get; set; }
While indexing it from NEST into Elasticsearch, it is saved along with the local time zone offset. How can I avoid this?
"PassedCreatedDate": "2015-08-14T15:50:04.0479046+05:30" //Actual value saved in ES
"PassedCreatedDate": "2015-08-14T15:50:04.047" //Expected value
The mapping of PassedCreatedDate in Elasticsearch is:
"PassedCreatedDate": {
"type": "date",
"format": "dateOptionalTime"
},
I am aware that I could make the field a string and provide the format in ElasticProperty, but is there any setting to avoid this time zone addition while still using a DateTime field?
There are two things to change to achieve saving DateTimes without the time zone offset.
Firstly, NEST uses JSON.Net for JSON serialization, so we need to change the serializer settings on the ElasticClient to serialize DateTimes into the desired format, and to interpret those DateTimes as Local kind when deserializing:
var settings = new ConnectionSettings(new Uri("http://localhost:9200"));

// Serialize DateTimes without the time zone offset, and treat
// deserialized DateTimes as Local kind.
settings.SetJsonSerializerSettingsModifier(jsonSettings =>
{
    jsonSettings.DateFormatString = "yyyy-MM-ddTHH:mm:ss";
    jsonSettings.DateTimeZoneHandling = DateTimeZoneHandling.Local;
});

var connection = new InMemoryConnection(settings);
var client = new ElasticClient(connection: connection);
Secondly, we need to tell Elasticsearch, via the mapping, the format of our DateTime for the field(s) in question:
"PassedCreatedDate": {
"type": "date",
"format": "yyyy-MM-ddTHH:mm:ss"
},
