Count occurrences in a Kibana saved search

I'm using a Kibana saved search to extract data to CSV and, as I'm quite new to this, I came here to get some help. I need to count how many occurrences of product there are in the JSON files I use (or, alternatively, I have a field named "ref" with associated values, but it goes from 0 to 99 and I need it to go from 1 to 1000). Here is an example of the files I use:
{
"header": {
"salesOrderId": "toto",
"salesOrderDate" : "2021-02-16T19:04:14+01:00",
"salesOrderChangeVersion": "0",
"purchaseOrderRequestId": "uncle",
"requestNumber": "00005",
"channel":"TEST",
"requestStatus": "ORDER_COMPLETE"
},
"reject": [],
"offer": [
{
"offerCode": "3150",
"offerFunctionalVersion": "1.0",
"contract":{
"contractNumber":"1-10000361221",
"contractVersionNumber":"0"
},
"criterion": [
{
"criterionCode": "firstStampPosition",
"criterionValue": "1"
},
{
"criterionCode": "maxWeight",
"criterionValue": "20"
},
{
"criterionCode": "specimenFlag",
"criterionValue": "0"
}
],
"customer": {
"custAccNumber": "777",
"customerType": "PAR"
},
"product": [
{
"productCode": "4L",
"criterion": [
{
"criterionCode": "sheet_formatCode",
"criterionValue": "L12A"
},
{
"criterionCode": "ref",
"criterionValue": "0"
},
{
"criterionCode": "marking_template",
"criterionValue": "stp1_fr"
},
{
"criterionCode": "logo",
"criterionValue": "link here"
},
{
"criterionCode": "weight",
"criterionValue": "21"
},
{
"criterionCode": "stamp_mention",
"criterionValue": "Utilisable par multiple au-delĂ  de 20g"
},
{
"criterionCode": "signedFlag",
"criterionValue": "1"
},
{
"criterionCode": "marking_productLabel",
"criterionValue": "Lettre verte"
},
{
"criterionCode": "SD_originCode",
"criterionValue": "87"
},
{
"criterionCode": "soCode",
"criterionValue": "381"
},
{
"criterionCode": "asCode",
"criterionValue": "A10"
},
{
"criterionCode": "countryCode",
"criterionValue": "250"
}
],
"criteriaGroup": [
{
"criteriaGroupCode": "addressGroup",
"criteriaGroupIndex": "0",
"criterion": [
{
"criterionCode": "receiver_address_name1",
"criterionValue": "M et Mme DUTEST"
},
{
"criterionCode": "receiver_address_add4",
"criterionValue": "33 rue de la force"
},
{
"criterionCode": "receiver_address_zipCode",
"criterionValue": "3417012"
},
{
"criterionCode": "receiver_address_town",
"criterionValue": "Far far away"
},
{
"criterionCode": "receiver_address_countryCode",
"criterionValue": "260"
}
]
},
{
"criteriaGroupCode" : "sheet_Group",
"criteriaGroupIndex" : "0",
"criterion" : [
{
"criterionCode": "sheet_formatCode",
"criterionValue": "L12A"
},
{
"criterionCode": "sheet_SDFlag",
"criterionValue": "1"
},
{
"criterionCode": "sheet_contractFlag",
"criterionValue": "1"
}
]
}
],
"service": []
}
]
}
]
}

The answer I've found so far was:
// size() = number of values indexed for this field on the document
try {
  return doc['offer.product.productCode'].size();
} catch (Exception e) {
  return '';
}
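As a cross-check, here is a minimal Groovy sketch (offline, independent of Kibana; the file name order.json is made up) that counts the product occurrences directly in one of the JSON files shown above:
import groovy.json.JsonSlurper

// offline illustration: count product entries / productCode values per file
// 'order.json' is a hypothetical file containing the document shown above
def doc = new JsonSlurper().parse(new File('order.json'))
def productCodes = doc.offer
        .collectMany { it.product ?: [] }   // every product of every offer
        .collect { it.productCode }
println "product count: ${productCodes.size()}"                 // 1 for the sample
println "occurrences per code: ${productCodes.countBy { it }}"  // [4L:1]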

Related

Groovy to collect and remove duplicates from a complex json structure

This is my first question on Stack Overflow, so first of all: hello colleagues, and many thanks in advance.
I have this JSON input message I'm dealing with, but I cannot find the key to extract the message I need for further processing:
{
"callId": "70f354ed47e643bc9d1cd6595e018f9b",
"errorCode": 0,
"apiVersion": 2,
"statusCode": 200,
"statusReason": "OK",
"time": "2022-08-01T07:56:34.631Z",
"results": [
{
"UID": "5abc8d08d8e148158610c7c6776c4ad5",
"groups": {
"organizations": [
{
"businessModels": [
{
"keys": [
"Company Code",
"Sales Org",
"Distribution Channel",
"Division"
],
"businessEntities": [
{
"codes": [
"HU50",
"HU50_HU50",
"HU50_HU50_10",
"HU50_HU50_10_10"
]
}
],
"id": "SalesArea_161185"
},
{
"keys": [
"ShiptoInc_SalesArea",
"ShiptoInc_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"100563692"
]
},
{
"codes": [
"HU50_HU50_10_10",
"100563691"
]
}
],
"id": "ShiptoInc_161185"
},
{
"keys": [
"Payer_SalesArea",
"Payer_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"960004763"
]
}
],
"id": "Payer_161185"
}
]
}
]
}
},
{
"UID": "d9f2b591f58e4aeebaa0b88175d4fe3c",
"groups": {
"organizations": [
{
"businessModels": [
{
"keys": [
"Company Code",
"Sales Org",
"Distribution Channel",
"Division"
],
"businessEntities": [
{
"codes": [
"HU50",
"HU50_HU50",
"HU50_HU50_10",
"HU50_HU50_10_10"
]
}
],
"id": "SalesArea_161185"
},
{
"keys": [
"ShiptoInc_SalesArea",
"ShiptoInc_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"100563692"
]
},
{
"codes": [
"HU50_HU50_10_10",
"100563691"
]
}
],
"id": "ShiptoInc_161185"
},
{
"keys": [
"Payer_SalesArea",
"Payer_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"960004763"
]
}
],
"id": "Payer_161185"
}
]
}
]
}
},
{
"UID": "74a9ccbc9b8549d1a7726ac1f77f7ea9",
"groups": {
"organizations": [
{
"businessModels": [
{
"keys": [
"ShiptoInc_SalesArea",
"ShiptoInc_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"100563692"
]
}
],
"id": "ShiptoInc_161185"
}
]
}
]
}
},
{
"UID": "d5ed356a3c2a48568ccacb8d9c7c5506",
"groups": {
"organizations": [
{
"businessModels": [
{
"keys": [
"Company Code",
"Sales Org",
"Distribution Channel",
"Division"
],
"businessEntities": [
{
"codes": [
"HU50",
"HU50_HU50",
"HU50_HU50_10",
"HU50_HU50_10_10"
]
}
],
"id": "SalesArea_161185"
},
{
"keys": [
"ShiptoInc_SalesArea",
"ShiptoInc_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"100563692"
]
},
{
"codes": [
"HU50_HU50_20_20",
"100563692"
]
},
{
"codes": [
"HU50_HU50_10_10",
"100563691"
]
}
],
"id": "ShiptoInc_161185"
},
{
"keys": [
"Payer_SalesArea",
"Payer_Id"
],
"businessEntities": [
{
"codes": [
"HU50_HU50_10_10",
"960004763"
]
}
],
"id": "Payer_161185"
}
]
}
]
}
}
],
"objectsCount": 4,
"totalCount": 4
}
For a known id ("Payer_161185" or "ShiptoInc_161185") and a given value ("100563692"), we need to extract all occurrences of businessEntities.codes across all UIDs and, once we have the list, remove duplicates.
For example, for "ShiptoInc_161185", the desired output would be:
{ "salesAreas": ["HU50_HU50_10_10","HU50_HU50_20_20"]}
This output is the list of salesAreas for the given value 100563692 across all entries with id = ShiptoInc_161185.
The other case I would like to solve is:
How could I use the id instead of the text salesAreas? Something like this: {"Payer_111":["HU50_HU50_10_10","HU50_HU50_30_20"],"Payer_222":["HU40_HU40_10_10","HU20_HU20_30_20"]}. In this case the id wouldn't be provided, just the prefix Payer_.
Your help is appreciated.
*** I solved the second requirement myself:
import groovy.json.JsonSlurper

// 'body' holds the JSON message shown above
def data = new JsonSlurper().parseText(body)

// keep only the businessModels whose id contains the Payer_ prefix
def bModelsIdFiltered = data.results.groups.organizations.businessModels
        .collect { it[0] }.flatten()
        .findAll { it.id.contains('Payer_') }

// append the id to each codes list so we can tell which Payer_ it came from
def nList = []
bModelsIdFiltered.each {
    it.businessEntities.codes.each { code ->
        nList.add(code.plus(it.id))
    }
}
println "nl " + nList

// keep only the entries that contain the value we are looking for
def codesFiltered = nList.findAll { '100563692' in it }
return codesFiltered
If the structure is rigid and I understood the task correctly (namely, that you should find codes that appear in the same array as the value 100563692), you can do it like this:
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
import spock.lang.Specification

class FindCodesSpec extends Specification {

    def testString = '''<insert_your_string_here>'''

    // flatten a list of lists by one level only
    def flattenOnce(List array) {
        return array.inject([]) { res, el -> res + el }
    }

    def findCodes(String message, String id, String code) {
        def data = new JsonSlurper().parseText(message)
        // pick the businessModels entries with the requested id
        def bModelsIdFiltered = data.results.groups.organizations.businessModels
                .collect { it[0] }.flatten()
                .findAll { it.id == id }
        // keep only the code arrays that contain the requested value,
        // then drop the value itself and deduplicate
        def codesFiltered = flattenOnce(bModelsIdFiltered.businessEntities.codes)
                .findAll { code in it }
        def uniqueCodes = codesFiltered.flatten().unique() - code
        return JsonOutput.toJson(['salesAreas': uniqueCodes])
    }

    def 'run test'() {
        expect:
        '''{"salesAreas":["HU50_HU50_10_10","HU50_HU50_20_20"]}''' == findCodes(testString, 'ShiptoInc_161185', '100563692')
    }
}
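For the second requirement (keying the output by the matching id instead of the fixed salesAreas label), a minimal sketch along the same lines, assuming the same message structure and that body holds the JSON string, could group the codes by id:
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// sketch only: group the matching code arrays by the id they belong to
def findCodesByPrefix(String message, String prefix, String code) {
    def data = new JsonSlurper().parseText(message)
    def models = data.results.groups.organizations.businessModels
            .collect { it[0] }.flatten()
            .findAll { it.id.startsWith(prefix) }
    def result = [:].withDefault { [] }
    models.each { m ->
        m.businessEntities.codes
                .findAll { code in it }
                .each { result[m.id].addAll(it - code) }
    }
    return JsonOutput.toJson(result.collectEntries { k, v -> [k, v.unique()] })
}

println findCodesByPrefix(body, 'ShiptoInc_', '100563692')
// for the message above: {"ShiptoInc_161185":["HU50_HU50_10_10","HU50_HU50_20_20"]}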

MongoDB - Document Structure to create matrix from multiple value pairs

I am new to NoSQL and MongoDB, so please don't bash. I have used SQL databases in the past, but am now looking to leverage the scalability of NoSQL. One application that comes to mind is the collection of experimental results, where they are serialized in some manner with a start date, end date, part number, serial number, etc. Along with each experiment, there are many "measurements" collected, but the list of measurements may be unique in each experiment.
I am looking for ideas on how to structure the documents to achieve the following tasks:
1) Query based on date ranges, part numbers, serial numbers
2) View the results in a "spreadsheet"-style table
3) Perform statistical calculations, perhaps with R, on the different "measurements"
An example might look like:
[
{
"_id": {
"$oid": "5e680d6063cb144f9d1be261"
},
"StartDate": {
"$date": {
"$numberLong": "1583841600000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842007000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3102",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.68453",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Temperature",
"Value": {
"$numberDouble": "68.43"
},
"Units": "DegF",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Velocity",
"Value": {
"$numberDouble": "12.4"
},
"Units": "ft/s",
"Flag": {
"$numberInt": "1"
}
}
]
},
{
"_id": {
"$oid": "5e68114763cb144f9d1be263"
},
"StartDate": {
"$date": {
"$numberLong": "1583842033000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3103",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.70153",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Temperature",
"Value": {
"$numberDouble": "68.55"
},
"Units": "DegF",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Velocity",
"Value": {
"$numberDouble": "12.7"
},
"Units": "ft/s",
"Flag": {
"$numberInt": "1"
}
}
]
},
{
"_id": {
"$oid": "5e68115f63cb144f9d1be264"
},
"StartDate": {
"$date": {
"$numberLong": "1583842464000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3104",
"Status": "Acceptable",
"Results": [
{
"Sensor": "Pressure",
"Value": "14.59243",
"Units": "PSIA",
"Flag": "1"
},
{
"Sensor": "Weight",
"Value": {
"$numberDouble": "67.93"
},
"Units": "lbf",
"Flag": {
"$numberInt": "1"
}
},
{
"Sensor": "Torque",
"Value": {
"$numberDouble": "122.33"
},
"Units": "ft-lbf",
"Flag": {
"$numberInt": "1"
}
}
]
}
]
Another approach might be:
[
{
"_id": {
"$oid": "5e680d6063cb144f9d1be261"
},
"StartDate": {
"$date": {
"$numberLong": "1583841600000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842007000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3102",
"Status": "Acceptable",
"Pressure (PSIA)" : "14.68453",
"Pressure - Flag": "1",
"Temperature (degF)": "68.43",
"Temperature - Flag": "1",
"Velocity (ft/s)": "12.4",
"Velocity Flag": "1"
},
{
"_id": {
"$oid": "5e68114763cb144f9d1be263"
},
"StartDate": {
"$date": {
"$numberLong": "1583842033000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3103",
"Status": "Acceptable",
"Pressure (PSIA)" : "14.70153",
"Pressure - Flag": "1",
"Temperature (degF)": "68.55",
"Temperature - Flag": "1",
"Velocity (ft/s)": "12.7",
"Velocity Flag": "1"
},
{
"_id": {
"$oid": "5e68115f63cb144f9d1be264"
},
"StartDate": {
"$date": {
"$numberLong": "1583842464000"
}
},
"EndDate": {
"$date": {
"$numberLong": "1583842434000"
}
},
"PartNumber": "1Z45NP7X",
"SerialNumber": "U84A3104",
"Status": "Acceptable",
"Pressure (PSIA)": "14.59243",
"Pressure - Flag": "1",
"Weight (lbf)": "67.93",
"Weight - Flag": "1",
"Torque (ft-lbf)": "122.33",
"Torque - Flag": "1"
}
]
An example table might look like this (flag columns abbreviated to "Fl"):
StartDate             EndDate               PartNumber  SerialNumber  Pressure  Fl   Temperature  Fl   Velocity  Fl   Torque  Fl   Weight  Fl
2020-03-10T12:00:00Z  2020-03-10T12:06:47Z  1Z45NP7X    U84A3102      14.68453  1    68.43        1    12.4      1    N/A     N/A  N/A     N/A
2020-03-10T12:07:13Z  2020-03-10T12:13:54Z  1Z45NP7X    U84A3103      14.70153  1    68.55        1    12.7      1    N/A     N/A  N/A     N/A
2020-03-10T12:07:13Z  2020-03-10T12:13:54Z  1Z45NP7X    U84A3104      14.59243  1    N/A          N/A  N/A       N/A  122.33  1    67.93   1
Any thoughts on the best structure? In reality, there might be 200+ "sensor values".
Thanks,
DG
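Purely as a sketch, assuming the first structure (embedded Results array) and the MongoDB Java sync driver used from Groovy (the connection string, database/collection names and driver version below are made-up placeholders), the three tasks could look roughly like this; the flattening loop emits one row per sensor reading, which R or a spreadsheet can then pivot into the wide table above:
@Grab('org.mongodb:mongodb-driver-sync:4.11.1')
import com.mongodb.client.MongoClients
import com.mongodb.client.model.Filters

// hypothetical connection / names, for illustration only
def client = MongoClients.create('mongodb://localhost:27017')
def coll   = client.getDatabase('lab').getCollection('experiments')

// task 1: query by date range + part number
def from = new Date(1583841600000L)
def to   = new Date(1583842500000L)
def docs = coll.find(Filters.and(
        Filters.gte('StartDate', from),
        Filters.lt('EndDate', to),
        Filters.eq('PartNumber', '1Z45NP7X')))

// tasks 2/3: flatten Results into one row per sensor reading
docs.each { doc ->
    doc.Results.each { r ->
        println([doc.SerialNumber, r.Sensor, r.Value, r.Units, r.Flag].join('\t'))
    }
}
client.close()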

What is the JsonPath expression for selecting an object based on sub-object values?

I need to be able to select elements within a JSON document based on the values in sub-elements which, unfortunately, reside in a list of key-value pairs (this is the structure I have to work with). I'm using Jayway 2.4.0.
Here is the JSON document:
{
"topLevelArray": [
{
"elementId": "Elem1",
"keyValuePairs": [
{
"key": "Length",
"value": "10"
},
{
"key": "Width",
"value": "3"
},
{
"key": "Producer",
"value": "alpha"
}
]
},
{
"elementId": "Elem2",
"keyValuePairs": [
{
"key": "Length",
"value": "20"
},
{
"key": "Width",
"value": "8"
},
{
"key": "Producer",
"value": "beta"
}
]
},
{
"elementId": "Elem3",
"keyValuePairs": [
{
"key": "Length",
"value": "15"
},
{
"key": "Width",
"value": "5"
},
{
"key": "Producer",
"value": "beta"
}
]
}
]
}
Here is the JsonPath I thought would do the trick:
$..topLevelArray[?(@.keyValuePairs[?(@.key=='Producer' && @.value=='beta')])]
and
$.topLevelArray[?(@.keyValuePairs[?(@.key=='Producer' && @.value=='beta')])]
Unfortunately, both return everything in the list, including the entry with a Producer of 'alpha'. Thanks in advance.
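If the nested filter keeps matching every element, one pragmatic workaround (a sketch, not a pure-JsonPath answer; it assumes the document above is stored in a hypothetical doc.json) is to select the elements with a simple path and do the key/value filtering in Groovy:
@Grab('com.jayway.jsonpath:json-path:2.4.0')
import com.jayway.jsonpath.JsonPath

// 'doc.json' is a hypothetical file holding the JSON document shown above
def json = new File('doc.json').text

// select all elements, then filter on the key/value pairs in Groovy
List elements = JsonPath.read(json, '$.topLevelArray[*]')
def betaElements = elements.findAll { el ->
    el.keyValuePairs.any { it.key == 'Producer' && it.value == 'beta' }
}
println betaElements*.elementId   // expected: [Elem2, Elem3]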

Why does Google Analytics return a different number of results with the same parameters?

Reporting API v4
I am a developer and I have access to my clients' Google AdWords and Analytics accounts. I have been using the AdWords and Analytics reporting APIs for almost a year now.
I am also using https://ga-dev-tools.appspot.com/query-explorer/ (the query builder) to compare whether I have retrieved the right amount of data.
I don't know if it's an error or not, but it's acting weird.
Try number 1, using https://ga-dev-tools.appspot.com/query-explorer/
I tried to add 2 metrics and 7 dimensions. This account ID contains 1 million rows in only 1 month; I know this because I retrieved 1 million rows in the range July 25, 2018 - August 16, 2018.
Then, here's the interesting part: I ran the query again with the same parameters and it retrieved 5999 results. I did it again and it returned 1 million. The results keep changing. I thought it was an error in my code, but it also happens in the query builder.
What do you guys think? Is it a bug or not?
You can try this yourself if you have more than a million rows.
I know it's not related to coding, but Google Analytics doesn't have forums the way AdWords does.
Try number 2, using this link: https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/reports/batchGet
This is my request:
{
"reportRequests": [
{
"dateRanges": [
{
"endDate": "2018-08-16",
"startDate": "2018-07-16"
}
],
"dimensions": [
{
"name": "ga:dimension2"
},
{
"name": "ga:dimension3"
},
{
"name": "ga:dimension1"
},
{
"name": "ga:adPlacementDomain"
}
],
"pageSize": 5,
"viewId": "********",
"samplingLevel": "LARGE",
"metrics": [
{
"expression": "ga:entrances"
},
{
"expression": "ga:newUsers"
}
],
"includeEmptyRows": true
}
]
}
The returned rowCount is sometimes 2111 and sometimes 1000000.
This is my response JSON for the 1 million result:
{
"reports": [
{
"columnHeader": {
"dimensions": [
"ga:dimension2",
"ga:dimension3",
"ga:dimension1",
"ga:adPlacementDomain"
],
"metricHeader": {
"metricHeaderEntries": [
{
"name": "ga:entrances",
"type": "INTEGER"
},
{
"name": "ga:newUsers",
"type": "INTEGER"
}
]
}
},
"data": {
"rows": [
{
"dimensions": [
"(other)",
"(other)",
"(other)",
"(other)"
],
"metrics": [
{
"values": [
"120834",
"68730"
]
}
]
},
{
"dimensions": [
"1000025873.1532426892",
"1532426891790.o9z84x",
"2018-07-24T11:08:15.449+01:00",
"unknown"
],
"metrics": [
{
"values": [
"0",
"0"
]
}
]
},
{
"dimensions": [
"1000025873.1532426892",
"1532426891790.o9z84x",
"2018-07-24T11:08:17.589+01:00",
"unknown"
],
"metrics": [
{
"values": [
"0",
"0"
]
}
]
},
{
"dimensions": [
"1000025873.1532426892",
"1532426891790.o9z84x",
"2018-07-24T11:08:31.809+01:00",
"unknown"
],
"metrics": [
{
"values": [
"0",
"0"
]
}
]
},
{
"dimensions": [
"1000025873.1532426892",
"1532427045552.p38pk78",
"2018-07-24T11:09:06.43+01:00",
"unknown"
],
"metrics": [
{
"values": [
"0",
"0"
]
}
]
}
],
"totals": [
{
"values": [
"158626",
"90225"
]
}
],
"rowCount": 1000000,
"minimums": [
{
"values": [
"0",
"0"
]
}
],
"maximums": [
{
"values": [
"120834",
"68730"
]
}
],
"isDataGolden": true
},
"nextPageToken": "5"
}
]
}
Another example response, when I have fewer than 1 million results:
{
"reports": [
{
"columnHeader": {
"dimensions": [
"ga:dimension2",
"ga:dimension3",
"ga:dimension1",
"ga:adPlacementDomain"
],
"metricHeader": {
"metricHeaderEntries": [
{
"name": "ga:entrances",
"type": "INTEGER"
},
{
"name": "ga:newUsers",
"type": "INTEGER"
}
]
}
},
"data": {
"rows": [
{
"dimensions": [
"1002211166.1531434756",
"1531762918308.fjnj7pa6",
"2018-07-16T18:41:58.307+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
},
{
"dimensions": [
"1002211166.1531434756",
"1531771001486.jawfrpz8",
"2018-07-16T20:56:41.482+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
},
{
"dimensions": [
"1002211166.1531434756",
"1531772475507.7n4w2qzb",
"2018-07-16T21:21:15.503+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
},
{
"dimensions": [
"1002211166.1531434756",
"1531859165986.zl7we6a5",
"2018-07-17T21:26:05.977+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
},
{
"dimensions": [
"1002211166.1531434756",
"1531859632678.dz7hccsa",
"2018-07-17T21:33:52.673+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
},
{
"dimensions": [
"1002211166.1531434756",
"1531861026792.kw71ngx9",
"2018-07-17T21:42:31.667+01:00",
"mobileapp::2-com.forsbit.spider"
],
"metrics": [
{
"values": [
"1",
"0"
]
}
]
}
],
"totals": [
{
"values": [
"2111",
"233"
]
}
],
"rowCount": 2112,
"minimums": [
{
"values": [
"0",
"0"
]
}
],
"maximums": [
{
"values": [
"1",
"1"
]
}
],
"isDataGolden": true
},
"nextPageToken": "6"
}
]
}
I am assuming that you have kept all the queries intact; double-check just to make sure.
The second step would be to check for sampling: look at the samplingSpaceSizes and samplesReadCounts fields in the response. If these fields are not defined, no sampling was introduced.
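For reference, when a report is sampled those two fields appear inside the data object of each report, next to rowCount; the numbers below are made up purely to show the shape of the response:
"data": {
  "rows": [ ... ],
  "rowCount": 1000000,
  "samplesReadCounts": [ "499630" ],
  "samplingSpaceSizes": [ "15328013" ]
}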

Different results between date histogram and date range on Elastic Search

I would like to analyse my log data with Elasticsearch/Kibana and count unique customers by month.
The results differ when I use a date histogram aggregation versus a date range aggregation.
Here is the date histogram query:
"query": {
"query_string": {
"query": "_type:logs AND created_at:[2015-04-01 TO now]",
"analyze_wildcard": true
}
},
"size": 0,
"aggs": {
"2": {
"date_histogram": {
"field": "created_at",
"interval": "1M",
"min_doc_count": 1
},
"aggs": {
"1": {
"cardinality": {
"field": "customer.id"
}
}
}
}
}
And the results:
"aggregations": {
"2": {
"buckets": [
{
"1": {
"value": 595805
},
"key_as_string": "2015-04-01T00:00:00.000Z",
"key": 1427839200000,
"doc_count": 6410438
},
{
"1": {
"value": 647788
},
"key_as_string": "2015-05-01T00:00:00.000Z",
"key": 1430431200000,
"doc_count": 6669555
},...
Here is the date range query:
"query": {
"query_string": {
"query": "_type:logs AND created_at:[2015-04-01 TO now]",
"analyze_wildcard": true
}
},
"size": 0,
"aggs": {
"2": {
"date_range": {
"field": "created_at",
"ranges": [
{
"from": "2015-04-01",
"to": "2015-05-01"
},
{
"from": "2015-05-01",
"to": "2015-06-01"
}
]
},
"aggs": {
"1": {
"cardinality": {
"field": "customer.id"
}
}
}
}
}
And the response:
"aggregations": {
"2": {
"buckets": [
{
"1": {
"value": 592179
},
"key": "2015-04-01T00:00:00.000Z-2015-05-01T00:00:00.000Z",
"from": 1427846400000,
"from_as_string": "2015-04-01T00:00:00.000Z",
"to": 1430438400000,
"to_as_string": "2015-05-01T00:00:00.000Z",
"doc_count": 6411884
},
{
"1": {
"value": 616995
},
"key": "2015-05-01T00:00:00.000Z-2015-06-01T00:00:00.000Z",
"from": 1430438400000,
"from_as_string": "2015-05-01T00:00:00.000Z",
"to": 1433116800000,
"to_as_string": "2015-06-01T00:00:00.000Z",
"doc_count": 6668060
}
]
}
}
In the first case, I have 595805 for April and 647788 for May.
In the second case, I have 592179 for April and 616995 for May.
Could someone explain why there are these differences between the two cases? Thank you.
Update: I have added another example with less data (a single day) but the same issue. Here is the first request, with a date histogram:
{
"size": 0,
"query": {
"query_string": {
"query": "_type:logs AND logs.created_at:[2015-04-01 TO 2015-04-01]",
"analyze_wildcard": true
}
},
"aggs": {
"2": {
"date_histogram": {
"field": "created_at",
"interval": "1h",
"pre_zone": "00:00",
"pre_zone_adjust_large_interval": true,
"min_doc_count": 1
},
"aggs": {
"1": {
"cardinality": {
"field": "customer.id"
}
}
}
}
}
}
And for the first hour we can see a unique count of 660 with a doc count of 1717:
{
"hits":{
"total":203961,
"max_score":0,
"hits":[
]
},
"aggregations":{
"2":{
"buckets":[
{
"1":{
"value":660
},
"key_as_string":"2015-04-01T00:00:00.000Z",
"key":1427846400000,
"doc_count":1717
},
{
"1":{
"value":324
},
"key_as_string":"2015-04-01T01:00:00.000Z",
"key":1427850000000,
"doc_count":776
},
{
"1":{
"value":190
},
"key_as_string":"2015-04-01T02:00:00.000Z",
"key":1427853600000,
"doc_count":481
}
]
}
}
}
But with the second request, using the date range:
{
"size": 0,
"query": {
"query_string": {
"query": "_type:logs AND logs.created_at:[2015-04-01 TO 2015-04-01]",
"analyze_wildcard": true
}
},
"aggs": {
"2": {
"date_range": {
"field": "created_at",
"ranges": [
{
"from": "2015-04-01T00:00:00",
"to": "2015-04-01T01:00:00"
},
{
"from": "2015-04-01T01:00:00",
"to": "2015-04-01T02:00:00"
}
]
},
"aggs": {
"1": {
"cardinality": {
"field": "customer.id"
}
}
}
}
}
}
We see a unique count of only 633 with the same doc count of 1717:
{
"hits":{
"total":203961,
"max_score":0,
"hits":[
]
},
"aggregations":{
"2":{
"buckets":[
{
"1":{
"value":633
},
"key":"2015-04-01T00:00:00.000Z-2015-04-01T01:00:00.000Z",
"from":1427846400000,
"from_as_string":"2015-04-01T00:00:00.000Z",
"to":1427850000000,
"to_as_string":"2015-04-01T01:00:00.000Z",
"doc_count":1717
},
{
"1":{
"value":328
},
"key":"2015-04-01T01:00:00.000Z-2015-04-01T02:00:00.000Z",
"from":1427850000000,
"from_as_string":"2015-04-01T01:00:00.000Z",
"to":1427853600000,
"to_as_string":"2015-04-01T02:00:00.000Z",
"doc_count":776
}
]
}
}
}
Could someone please tell me why? Thank you.
When using the date_histogram aggregation you need to take the timezone into account, whereas date_range doesn't, as it always uses the GMT timezone.
If you look at the long millisecond values in your results, you'll see the following:
For your date histogram, the key 1427839200000 is actually equal to 2015-03-31T22:00:00.000Z, which differs from the key_as_string value (i.e. 2015-04-01T00:00:00.000Z) that is formatted according to the GMT timezone.
In your first aggregation, try explicitly specifying the time_zone parameter as your current timezone (apparently GMT+2) and you should get the same results:
"date_histogram": {
  "field": "created_at",
  "interval": "1M",
  "min_doc_count": 1,
  "time_zone": "+02:00"
},
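Alternatively (a sketch, not part of the original answer), you can leave the histogram as it is and make the date_range buckets line up with the zone-shifted histogram buckets by putting the UTC offset directly into the range bounds:
"date_range": {
  "field": "created_at",
  "ranges": [
    { "from": "2015-04-01T00:00:00+02:00", "to": "2015-05-01T00:00:00+02:00" },
    { "from": "2015-05-01T00:00:00+02:00", "to": "2015-06-01T00:00:00+02:00" }
  ]
}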
