NEST is adding a time zone while indexing DateTime fields in Elasticsearch

I have a DateTime field in my C# class, as below:
public DateTime PassedCreatedDate { get; set; }
While indexing it via NEST into Elasticsearch, it is saved along with the local time zone offset. How can I avoid this?
"PassedCreatedDate": "2015-08-14T15:50:04.0479046+05:30" //Actual value saved in ES
"PassedCreatedDate": "2015-08-14T15:50:04.047" //Expected value
The mapping of PassedCreatedDate in Elasticsearch is:
"PassedCreatedDate": {
"type": "date",
"format": "dateOptionalTime"
},
I am aware that I could declare the field as a string and provide the format in ElasticProperty, but is there a setting that avoids this time zone addition while keeping the DateTime field?

There are two things to change to achieve saving DateTimes without the time zone offset.
Firstly, NEST uses Json.NET for JSON serialization, so we need to change the serializer settings on the ElasticClient to serialize DateTimes into the desired format, and to interpret those DateTimes as Local kind when deserializing:
var settings = new ConnectionSettings(new Uri("http://localhost:9200"));
settings.SetJsonSerializerSettingsModifier(jsonSettings =>
{
    jsonSettings.DateFormatString = "yyyy-MM-ddTHH:mm:ss";
    jsonSettings.DateTimeZoneHandling = DateTimeZoneHandling.Local;
});
// InMemoryConnection is used here for demonstration; omit it to send real requests.
var connection = new InMemoryConnection(settings);
var client = new ElasticClient(settings, connection);
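With those settings in place, indexing a document should now serialize the date without an offset; a quick sketch (MyDocument is an illustrative type, not from the original question):
var doc = new MyDocument { PassedCreatedDate = DateTime.Now };
client.Index(doc);
// Serialized as, e.g.: "PassedCreatedDate": "2015-08-14T15:50:04" (no time zone offset)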
Secondly, we need to tell Elasticsearch, via the mapping, the format of our DateTime for the field(s) in question:
"PassedCreatedDate": {
"type": "date",
"format": "yyyy-MM-ddTHH:mm:ss"
},
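The same mapping can also be applied from C# through NEST's fluent mapping API; here is a sketch against NEST 1.x (MyDocument is an illustrative document type, and method names may vary slightly across NEST versions):
client.Map<MyDocument>(m => m
    .Properties(props => props
        .Date(d => d
            .Name(doc => doc.PassedCreatedDate)
            .Format("yyyy-MM-ddTHH:mm:ss"))));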

Related

Crossfilter for Date for string values

I have a JSON model that contains strings instead of dates (the model is generated via T4TS, so I cannot change that).
The code is currently using an expanded model extending the original json, where the dates are recalculated on new fields.
I was wondering whether it would be possible to apply the filters to the string fields directly, without the additional step of extending the model.
private makeNumeric(label: string, property: string) {
    return {
        label: label,
        key: property,
        prepareDimension: (crossfilter) => CrossfilterUtils.makeNumeric(crossfilter, property),
        prepareGroup: (dimension) => {
            if (!this.values[property]) {
                var group = CrossfilterUtils.makeNumericGroup(dimension);
                this.values[property] = group;
            }
            return this.values[property];
        },
        valuesAreOrdinal: false
    };
}
I haven't used the crossfilter library much before, and from the documentation I can't seem to reconcile it with the code (legacy code, to put it that way).
The incoming date format looks like this: "2020-10-22T07:26:00Z"
The typescript model I'm working with is like this:
interface MyModel {
...
CreatedDate?: string;
}
Any idea?
The usual pattern in JavaScript is to loop through the data and do any conversions you need:
data.forEach(function(d) {
    d.date = new Date(d.date);
    d.number = +d.number;
});
const cf = crossfilter(data);
However, if this is not allowed due to TS, you can also make the conversions when creating your dimensions and groups:
const cf = crossfilter(data);
const dateDim = cf.dimension(d => new Date(d.date));
const monthGroup = dateDim.group(date => d3.timeMonth(date))
    .reduceSum(d => +d.number);
I find this a little less robust because you have to remember to do this everywhere. It's a little harder to reason about the efficiency since you have to trust that crossfilter uses the accessors sparingly, but I don't recall seeing this be a problem in practice.

How do I suppress CosmosDB "default" info in resultsets?

I want to suppress the CosmosDB information in the following resultset; how can that be done?
{
"id": null,
"_rid": null,
"_self": null,
"_ts": 0,
"_etag": null,
"topLevelCategory": "Shorts,Skirt"
},
This is an extract, of course, but I don't want to show the ID etc., as they serve no purpose in this result, and I cannot figure out how to suppress that info.
I expect the following
{
"topLevelCategory": "Shorts,Skirt"
},
The query looks as follows:
$"SELECT DISTINCT locales.categories[0] AS topLevelCategory " +
$"FROM c JOIN locales in c.locales " +
$"WHERE locales.country = '{apiInputObject.Locale}' " +
$"AND locales.language = '{apiInputObject.Language}'";
The interesting thing is that if I cast the result as a JObject I don't get the system data; I only get it if I call CreateDocumentQuery as Document. So a workaround would be as follows:
IQueryable<JObject> queryResultSet = client.CreateDocumentQuery<JObject>(UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection), parsedQueryObject.SqlStatement, queryOptions);
but that has other async issues. The query above does not show the system-generated IDs, but the one below does:
var query = client.CreateDocumentQuery<Document>(UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection), parsedQueryObject.SqlStatement, queryOptions).AsDocumentQuery();
var result = await query.ExecuteNextAsync<Document>();
These are system-generated properties of items in Cosmos DB.
You can simply keep them out of the projection in your SQL: select c.topLevelCategory from c returns only that property, whereas select * from c brings the system properties along. Filtering in the SQL is the best method, better than post-processing the result set.
Updated answer:
In your situation, executing the exact same query, the JObject result does not show the system data but the Document result does.
My explanation is as follows:
The Document class is a base class in the DocumentDB .NET SDK; it is what defines the system-generated properties shown above (id, _rid, _self, _ts, _etag).
The SDK tries to map the result data, field by field, onto the entity class you specify in CreateDocumentQuery<T>.
So, actually, you have already found the solution: define your own POCO to receive the result data, and include only the properties you want, like:
class Pojo
{
    // A plain class (not derived from Document), so no system-generated
    // properties are deserialized alongside your own.
    public string id { get; set; }
    public string name { get; set; }
}
That keeps the fields with business meaning and drops the redundant ones. Hope I'm clear on this.
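For the case in the question, a minimal sketch along those lines (CategoryResult is an illustrative name; client, parsedQueryObject, and queryOptions are the ones from the code above) might be:
class CategoryResult
{
    public string topLevelCategory { get; set; }
}

var query = client.CreateDocumentQuery<CategoryResult>(
        UriFactory.CreateDocumentCollectionUri(databaseName, databaseCollection),
        parsedQueryObject.SqlStatement,
        queryOptions)
    .AsDocumentQuery();

while (query.HasMoreResults)
{
    foreach (var item in await query.ExecuteNextAsync<CategoryResult>())
    {
        // Only topLevelCategory is populated; no _rid/_self/_ts/_etag appear.
        Console.WriteLine(item.topLevelCategory);
    }
}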

How to map a Firestore date object to a date in Elasticsearch

I am using a Cloud Function to send a Firebase Firestore document to Elasticsearch for indexing. I am trying to find a way to map a Firestore timestamp field to an Elasticsearch date field in the index.
The Elasticsearch date type mapping supports the epoch_millis and epoch_second formats, but the Firestore date type is an object, as follows:
"timestamp": {
"_seconds": 1551833330,
"_nanoseconds": 300000000
},
I could use the _seconds field, but I would lose the fractional part of the second.
Is there a way to map the timestamp object to a date field in the index that calculates epoch_millis from the _seconds and _nanoseconds fields? I recognize that precision will be lost (nanos to millis).
If you don't mind losing the fractional part of the second, you could set a mapping on your index like this, which is what I ended up doing:
"mappings": {
  "date_detection": false,
  "dynamic_templates": [
    {
      "dates": {
        "match": ".*_seconds",
        "match_pattern": "regex",
        "mapping": {
          "type": "date",
          "format": "epoch_second"
        }
      }
    }
  ]
}
It will convert any timestamps (even nested in the document) to dates with second precision.
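If you do need the fractional part, you could instead compute epoch milliseconds in the indexing function before sending the document, and map that field with the epoch_millis format. The arithmetic, sketched here in C# with the values from the question (the actual Cloud Function would do the same in JavaScript):
// Combine Firestore's _seconds and _nanoseconds into epoch milliseconds.
long seconds = 1551833330;        // timestamp._seconds
long nanoseconds = 300000000;     // timestamp._nanoseconds
long epochMillis = seconds * 1000 + nanoseconds / 1000000;  // 1551833330300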

Store Date Format in Elasticsearch

I ran into a problem when adding a datetime string to Elasticsearch.
The document is below:
{"LastUpdate" : "2013/07/24 00:00:00"}
Indexing this document raised a NumberFormatException [For input string: \"20130724 00:00:00\"].
I know that I can use a date format in Elasticsearch, but I don't know how to use it, even after reading the documentation on the website.
{"LastUpdate": {
"properties": {
"type": "date",
"format": "yyyy-MM-dd"}
}
}
and
{"LastUpdate": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
are wrong.
How can I convert the datetime string into a date format in Elasticsearch?
How can I store the datetime string directly in Elasticsearch?
You are nearly there. Set your mapping like this:
{"LastUpdate": {
"type" : "date",
"format" : "yyyy/MM/dd HH:mm:ss"}
}
Read the docs on the date mapping and its options, in particular the format parameter.
Good luck!

In MVC app using JQGrid - how to set user data in the controller action

How do you set the userdata in the controller action? The way I'm doing it is breaking my grid. I'm trying a simple test with no luck. Here's my code, which does not work. Thanks.
var dataJson = new
{
total =
page = 1,
records = 10000,
userdata = "{test1:thefield}",
rows = (from e in equipment
select new
{
id = e.equip_id,
cell = new string[] {
e.type_desc,
e.make_descr,
e.model_descr,
e.equip_year,
e.work_loc,
e.insp_due_dt,
e.registered_by,
e.managed_by
}
}).ToArray()
};
return Json(dataJson);
I don't think you have to convert it to an array. I've used jqGrid and I just let the Json function serialize the object. I'm not certain that would cause a problem, but it's unnecessary at the very least.
Also, your user data would evaluate to a string (because you are sending it as a string). Try sending it as an anonymous object, i.e.:
userdata = new { test1 = "thefield" },
You need a value for total and a comma between that and page. (I'm guessing that's a typo. I don't think that would compile as is.)
EDIT:
Also, I would recommend adding the option "jsonReader: { repeatitems: false }" to your JavaScript. This will allow you to send your collection in the "rows" field without converting it to the "{id: ID, cell: [ data_row_as_array ] }" syntax. You can set the property "key = true" in your colModel to indicate which field is the ID. It makes it a lot simpler to pass data to the grid. A corrected version of the action, putting these suggestions together, is sketched below.
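This is a sketch, not a drop-in fix: total is hard-coded for the test and would normally be computed from the record count and page size.
var dataJson = new
{
    total = 1,                              // was missing a value and a comma
    page = 1,
    records = 10000,
    userdata = new { test1 = "thefield" },  // anonymous object, not a string
    rows = from e in equipment              // no ToArray() needed; Json() serializes the sequence
           select new
           {
               id = e.equip_id,
               cell = new string[] {
                   e.type_desc,
                   e.make_descr,
                   e.model_descr,
                   e.equip_year,
                   e.work_loc,
                   e.insp_due_dt,
                   e.registered_by,
                   e.managed_by
               }
           }
};
return Json(dataJson);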
