Kibana KQL - finding all log statements where a parameter value is greater than 2

I'm writing a KQL query to build a Kibana visualization. I've built a query that finds my expected result, but it isn't perfect.
Points to be noted -
Data is logger messages, not JSON
I searched a lot, but most answers and Stack Overflow suggestions were for JSON data
My queries are on the "message" field
Expected result - all log messages which have noOfFlexJobs>2
Here is my Query-1 -
message:"thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32" and message:"noOfFlexJobs="
Query-1 Result -
Time message
Jan 28, 2021 # 09:20:14.503 2021-01-28T09:20:14.503-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1480876, noOfJobPrefs=0, noOfFlexJobs=0
Jan 28, 2021 # 09:20:14.486 2021-01-28T09:20:14.486-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=a787754, noOfJobPrefs=0, noOfFlexJobs=1
Jan 28, 2021 # 09:20:14.470 2021-01-28T09:20:14.470-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1478669, noOfJobPrefs=0, noOfFlexJobs=1
Jan 28, 2021 # 09:20:14.454 2021-01-28T09:20:14.454-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1478668, noOfJobPrefs=0, noOfFlexJobs=0
Jan 28, 2021 # 09:20:14.443 2021-01-28T09:20:14.443-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1278828, noOfJobPrefs=0, noOfFlexJobs=3
Jan 28, 2021 # 09:20:14.418 2021-01-28T09:20:14.418-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1472766, noOfJobPrefs=0, noOfFlexJobs=4
Jan 28, 2021 # 09:20:14.391 2021-01-28T09:20:14.391-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1478985, noOfJobPrefs=0, noOfFlexJobs=5
Jan 28, 2021 # 09:20:14.380 2021-01-28T09:20:14.379-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1472442, noOfJobPrefs=0, noOfFlexJobs=11
Jan 28, 2021 # 09:20:14.357 2021-01-28T09:20:14.357-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1502372, noOfJobPrefs=0, noOfFlexJobs=0
Jan 28, 2021 # 09:20:14.352 2021-01-28T09:20:14.352-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1477010, noOfJobPrefs=0, noOfFlexJobs=0
Jan 28, 2021 # 09:20:14.342 2021-01-28T09:20:14.342-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1467206, noOfJobPrefs=0, noOfFlexJobs=16
To get the desired result, I updated my query -
Query-2
message:"thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32" and (message:"noOfFlexJobs=3" or message:"noOfFlexJobs=4" or message:"noOfFlexJobs=5")
Query-2 Result
Time message
Jan 28, 2021 # 09:20:14.443 2021-01-28T09:20:14.443-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1278828, noOfJobPrefs=0, noOfFlexJobs=3
Jan 28, 2021 # 09:20:14.418 2021-01-28T09:20:14.418-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1472766, noOfJobPrefs=0, noOfFlexJobs=4
Jan 28, 2021 # 09:20:14.391 2021-01-28T09:20:14.391-0600 level=INFO, thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32, cat=EmployeeJobsScript, [] msg=Employee jobs data for scriptName=getEmployeeNumbersForFlexJob scriptStatus=RUNNING executionId=2b80ac18-d97c-4af0-a14b-53b14f1bbc32 EmployeeNumber=A1478985, noOfJobPrefs=0, noOfFlexJobs=5
I understand why I'm getting only 3 rows; if I add the remaining query parameters for 6, 7, ... etc., I'll get my desired output. But I'm not sure what the max value for noOfFlexJobs will be.
I tried message:"noOfFlexJobs=">2 but it didn't work.
Is it possible to query on message statements?
Is there a way to find all statements which have noOfFlexJobs>2?
Thanks in advance!

I have figured it out. It can be done using the not keyword in KQL.
So the answer is:
message:"thread=getEmployeeNumbersForFlexJob-2b80ac18-d97c-4af0-a14b-53b14f1bbc32" and not message:"noOfFlexJobs=0" and not message:"noOfFlexJobs=1"
(Note: this keeps messages with noOfFlexJobs=2 as well; to strictly match noOfFlexJobs>2, also add and not message:"noOfFlexJobs=2".)
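The not trick filters on the analyzed text, but KQL's range syntax (field > value) only works against a numeric field. A sketch of an alternative, assuming you can add an Elasticsearch ingest pipeline (the pipeline name and extracted field name below are hypothetical, not from the original setup): extract noOfFlexJobs into its own integer field with a grok processor.

```json
PUT _ingest/pipeline/parse-flexjobs
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["noOfFlexJobs=%{NUMBER:noOfFlexJobs:int}"],
        "ignore_failure": true
      }
    }
  ]
}
```

With the value indexed as an integer, the visualization can filter on noOfFlexJobs > 2 directly, whatever the maximum value turns out to be.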

Zendesk Maxwell crashing/going down

We have Maxwell running on Docker in production, configured to listen for changes on MariaDB tables and push them to a Kafka topic. Our MariaDB has some 10-12 tables; we have configured Maxwell to listen to only 4 of them, and all 4 are heavily used (the number of writes is very high).
Recently we have seen that our Maxwell is going down. Following are the few error logs we found when Maxwell goes down:
Oct 28, 2022 # 06:10:40.037 06:10:40,037 ERROR MysqlParserListener - (parse SET STATEMENT max_statement_time = 60 FOR flush table)
Oct 28, 2022 # 06:10:40.040 06:10:40,040 ERROR SchemaChange - Error parsing SQL: 'SET STATEMENT max_statement_time=60 FOR flush table'
Oct 28, 2022 # 06:10:40.047 06:10:40,045 ERROR AbstractSchemaStore - Error on bin log position Position[BinlogPosition[mysql-bin-changelog.049610:3209], lastHeartbeat=1666898518069]
As the last error log says Error on bin log position Position[BinlogPosition[mysql-bin-changelog.049610:3209], we ran SHOW BINARY LOGS; to verify whether a binlog file named mysql-bin-changelog.049610 (position 3209) exists, and we found that file in the list output by the command.
Please help me understand what is going wrong here and how to solve the issue.
Environment details:
OS: CentOS
Platform: AWS instance (T4g Medium)
CPU: 2 CPUs
RAM: 4 GB
Maxwell version: v1.37.7
Note: We are not handling DDL, and our log retention is 3 days.
Detailed logs that were logged when Maxwell was going down (sorry for posting huge logs :)
Oct 28, 2022 # 06:10:39.084 06:10:39,082 INFO ProducerConfig - ProducerConfig values:
Oct 28, 2022 # 06:10:39.146 06:10:39,146 INFO AppInfoParser - Kafka commitId : aaa7af6d4a11b29d
Oct 28, 2022 # 06:10:39.146 06:10:39,146 INFO AppInfoParser - Kafka version : 1.0.0
Oct 28, 2022 # 06:10:39.188 06:10:39,187 INFO Maxwell - Maxwell v1.37.7 is booting (MaxwellKafkaProducer), starting at Position[BinlogPosition[mysql-bin-changelog.049610:3136], lastHeartbeat=1666898518069]
Oct 28, 2022 # 06:10:39.377 06:10:39,377 INFO MysqlSavedSchema - Restoring schema id 22 (last modified at Position[BinlogPosition[mysql-bin-changelog.046940:91534], lastHeartbeat=1666103424529])
Oct 28, 2022 # 06:10:39.502 06:10:39,502 INFO MysqlSavedSchema - Restoring schema id 1 (last modified at Position[BinlogPosition[mysql-bin-changelog.040524:418929], lastHeartbeat=0])
Oct 28, 2022 # 06:10:39.555 06:10:39,555 INFO MysqlSavedSchema - beginning to play deltas...
Oct 28, 2022 # 06:10:39.576 06:10:39,576 INFO MysqlSavedSchema - played 21 deltas in 15ms
Oct 28, 2022 # 06:10:39.601 06:10:39,601 INFO BinlogConnectorReplicator - Setting initial binlog pos to: mysql-bin-changelog.049610:3136
Oct 28, 2022 # 06:10:39.605 06:10:39,605 INFO MaxwellHTTPServer - Maxwell http server starting
Oct 28, 2022 # 06:10:39.607 06:10:39,607 INFO MaxwellHTTPServer - Maxwell http server started on port 8080
Oct 28, 2022 # 06:10:39.649 06:10:39,649 INFO BinlogConnectorReplicator - Binlog connected.
Oct 28, 2022 # 06:10:39.649 06:10:39,646 INFO BinaryLogClient - Connected to mariadb-nlp.app.engati.local:3306 at mysql-bin-changelog.049610/3136 (sid:6379, cid:1560183)
Oct 28, 2022 # 06:10:39.669 06:10:39,669 INFO log - Logging initialized #2288ms to org.eclipse.jetty.util.log.Slf4jLog
Oct 28, 2022 # 06:10:39.902 06:10:39,902 INFO Server - jetty-9.4.41.v20210516; built: 2021-05-16T23:56:28.993Z; git: 98607f93c7833e7dc59489b13f3cb0a114fb9f4c; jvm 11.0.15+10
Oct 28, 2022 # 06:10:40.037 06:10:40,037 ERROR MysqlParserListener - (parse SET STATEMENT max_statement_time = 60 FOR flush table)
Oct 28, 2022 # 06:10:40.040 06:10:40,040 ERROR SchemaChange - Error parsing SQL: 'SET STATEMENT max_statement_time=60 FOR flush table'
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.Maxwell.main(Maxwell.java:336)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.Maxwell.start(Maxwell.java:227)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.Maxwell.startInner(Maxwell.java:301)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.util.RunLoopProcess.runLoop(RunLoopProcess.java:34)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.work(BinlogConnectorReplicator.java:228)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.getRow(BinlogConnectorReplicator.java:723)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:400)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:378)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.schema.MysqlSchemaStore.processSQL(MysqlSchemaStore.java:102)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.schema.AbstractSchemaStore.resolveSQL(AbstractSchemaStore.java:49)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.schema.ddl.SchemaChange.parse(SchemaChange.java:108)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.schema.ddl.SchemaChange.parseSQL(SchemaChange.java:94)
Oct 28, 2022 # 06:10:40.047 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
Oct 28, 2022 # 06:10:40.047 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:17)
Oct 28, 2022 # 06:10:40.047 at com.zendesk.maxwell.schema.ddl.MysqlParserListener.visitErrorNode(MysqlParserListener.java:93)
Oct 28, 2022 # 06:10:40.047 com.zendesk.maxwell.schema.ddl.MaxwellSQLSyntaxError: SET
Oct 28, 2022 # 06:10:40.047 06:10:40,045 ERROR AbstractSchemaStore - Error on bin log position Position[BinlogPosition[mysql-bin-changelog.049610:3209], lastHeartbeat=1666898518069]
Oct 28, 2022 # 06:10:40.047 06:10:40,044 INFO ContextHandler - Started o.e.j.s.ServletContextHandler#5628733f{/,null,AVAILABLE}
Oct 28, 2022 # 06:10:40.075 06:10:40,074 INFO Server - Started #2700ms
Oct 28, 2022 # 06:10:40.075 06:10:40,074 INFO AbstractConnector - Started ServerConnector#7152d3c1{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
Oct 28, 2022 # 06:10:40.129 06:10:40,129 INFO BinlogConnectorReplicator - Binlog disconnected.
Oct 28, 2022 # 06:10:40.149 06:10:40,149 INFO TaskManager - Stopping 5 tasks
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.schema.ddl.SchemaChange.parseSQL(SchemaChange.java:94) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28) ~[antlr4-runtime-4.8-1.jar:4.8-1]
Oct 28, 2022 # 06:10:40.187 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:17) ~[antlr4-runtime-4.8-1.jar:4.8-1]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.schema.ddl.MysqlParserListener.visitErrorNode(MysqlParserListener.java:93) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 com.zendesk.maxwell.schema.ddl.MaxwellSQLSyntaxError: SET
Oct 28, 2022 # 06:10:40.187 06:10:40,152 ERROR TaskManager - cause:
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.Maxwell.main(Maxwell.java:336) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.Maxwell.start(Maxwell.java:227) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.Maxwell.startInner(Maxwell.java:301) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.util.RunLoopProcess.runLoop(RunLoopProcess.java:34) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.work(BinlogConnectorReplicator.java:228) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.getRow(BinlogConnectorReplicator.java:723) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:400) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:378) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.schema.MysqlSchemaStore.processSQL(MysqlSchemaStore.java:102) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.schema.AbstractSchemaStore.resolveSQL(AbstractSchemaStore.java:49) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.187 at com.zendesk.maxwell.schema.ddl.SchemaChange.parse(SchemaChange.java:108) ~[maxwell-1.37.7.jar:1.37.7]
Oct 28, 2022 # 06:10:40.189 06:10:40,189 INFO TaskManager - Stopping: com.zendesk.maxwell.schema.PositionStoreThread#51b3ee1b
Oct 28, 2022 # 06:10:40.191 06:10:40,189 INFO TaskManager - Stopping: com.zendesk.maxwell.producer.MaxwellKafkaProducerWorker#5ec6ffb6
Oct 28, 2022 # 06:10:40.192 06:10:40,192 INFO KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
Oct 28, 2022 # 06:10:40.201 06:10:40,201 INFO TaskManager - Stopping: com.zendesk.maxwell.monitoring.MaxwellHTTPServerWorker#1490968e
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.Maxwell.main(Maxwell.java:336)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.Maxwell.start(Maxwell.java:227)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.Maxwell.startInner(Maxwell.java:301)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.util.RunLoopProcess.runLoop(RunLoopProcess.java:34)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.work(BinlogConnectorReplicator.java:228)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.getRow(BinlogConnectorReplicator.java:723)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:400)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.replication.BinlogConnectorReplicator.processQueryEvent(BinlogConnectorReplicator.java:378)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.schema.MysqlSchemaStore.processSQL(MysqlSchemaStore.java:102)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.schema.AbstractSchemaStore.resolveSQL(AbstractSchemaStore.java:49)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.schema.ddl.SchemaChange.parse(SchemaChange.java:108)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.schema.ddl.SchemaChange.parseSQL(SchemaChange.java:94)
Oct 28, 2022 # 06:10:40.201 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:28)
Oct 28, 2022 # 06:10:40.201 at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:17)
Oct 28, 2022 # 06:10:40.201 at com.zendesk.maxwell.schema.ddl.MysqlParserListener.visitErrorNode(MysqlParserListener.java:93)
Oct 28, 2022 # 06:10:40.201 com.zendesk.maxwell.schema.ddl.MaxwellSQLSyntaxError: SET
Oct 28, 2022 # 06:10:40.201 06:10:40,199 INFO TaskManager - Stopping: com.zendesk.maxwell.replication.BinlogConnectorReplicator#5a1d68e0
Oct 28, 2022 # 06:10:40.201 06:10:40,198 INFO TaskManager - Stopping: com.zendesk.maxwell.bootstrap.BootstrapController#3857c48d
Oct 28, 2022 # 06:10:40.220 06:10:40,219 INFO AbstractConnector - Stopped ServerConnector#7152d3c1{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
Oct 28, 2022 # 06:10:40.223 06:10:40,223 INFO ContextHandler - Stopped o.e.j.s.ServletContextHandler#5628733f{/,null,STOPPED}
Oct 28, 2022 # 06:10:41.637 06:10:41,636 INFO TaskManager - Stopped all tasks
The Java backtrace shows this is an error in Maxwell. It was reported as issue 1931.
The issue is that Maxwell v1.37.7 doesn't check SET STATEMENT ... FOR against its blacklist of SQL (such as FLUSH) that it is meant to take no action on.
Thanks to a very responsive upstream, this is now fixed in v1.39.2.

How can I create a CASE Statement BETWEEN Two Dates?

I am looking to do a CASE statement in a calculated field to identify dates as holidays, and have tried:
CASE
WHEN
FORMAT_DATETIME("%Y%M%D", date) BETWEEN 20190218 AND 20190219 OR
FORMAT_DATETIME("%Y%M%D", date) BETWEEN 20220220 AND 20220222
THEN 'presidents_day'
ELSE NULL
END
I am having trouble formatting the date correctly for the BETWEEN statement, and I am getting the error:
Invalid formula - Operator "BETWEEN" doesn't support TEXT BETWEEN NUMBER AND NUMBER. Operator "BETWEEN" supports ANY BETWEEN ANY AND ANY.
Data Set (Google Sheets):
date
17 Feb 2019
18 Feb 2019
19 Feb 2019
20 Feb 2019
19 Feb 2022
20 Feb 2022
21 Feb 2022
22 Feb 2022
23 Feb 2022
Expected output:
date          Date_CASE
Feb 17, 2019
Feb 18, 2019  presidents_day
Feb 19, 2019  presidents_day
Feb 20, 2019
Feb 19, 2022
Feb 20, 2022  presidents_day
Feb 21, 2022  presidents_day
Feb 22, 2022  presidents_day
Feb 23, 2022
Google Data Studio report
Using the BETWEEN operator in conjunction with the DATE function ("Creates a date" in the YYYY, MM, DD format) would create the required CASE statement (where date represents the date field):
CASE
WHEN
date BETWEEN DATE(2019, 02, 18) AND DATE(2019, 02, 19) OR
date BETWEEN DATE(2022, 02, 20) AND DATE(2022, 02, 22)
THEN "presidents_day"
ELSE NULL
END
A publicly editable Google Data Studio report (with an embedded Google Sheets data source) and a GIF are available to elaborate.

How can I write a shell script that calls another script for each day, starting from a start date up to the current date [duplicate]

This question already has answers here:
How to loop through dates using Bash?
(10 answers)
Closed 1 year ago.
How can I write a shell script that calls another script for each day?
i.e. currently I have
#!/bin/sh
./update.sh 2020 02 01
./update.sh 2020 02 02
./update.sh 2020 02 03
but I want to just specify the start date (2020 02 01) and let it run update.sh for every day up to the current date; I don't know how to manipulate dates in a shell script.
I made a stab at it, but it's rather messy; I would prefer if it could process the dates itself.
#!/bin/bash
for j in {4..9}
do
for k in {1..9}
do
echo "update.sh" 2020 0$j 0$k
./update.sh 2020 0$j 0$k
done
done
for j in {10..12}
do
for k in {10..31}
do
echo "update.sh" 2020 $j $k
./update.sh 2020 $j $k
done
done
for j in {1..9}
do
for k in {1..9}
do
echo "update.sh" 2021 0$j 0$k
./update.sh 2021 0$j 0$k
done
done
for j in {1..9}
do
for k in {10..31}
do
echo "update.sh" 2021 0$j $k
./update.sh 2021 0$j $k
done
done
You can use date to convert your input dates into seconds in order to compare them, and use date again to add one day.
#!/bin/bash
start_date=$(date -I -d "$1") # Input in format yyyy-mm-dd
end_date=$(date -I) # Today in format yyyy-mm-dd
echo "Start: $start_date"
echo "Today: $end_date"
d=$start_date # In case you want start_date for later?
end_d=$(date -d "$end_date" +%s) # End date in seconds
while [ $(date -d "$d" +%s) -le $end_d ]; do # Check dates in seconds
# Replace `echo` in the below with your command/script
echo ${d//-/ } # Output the date but replace - with [space]
d=$(date -I -d "$d + 1 day") # Next day
done
In this example, I use echo but replace this with the path to your update.sh.
Sample output:
[user#server:~]$ ./dateloop.sh 2021-08-29
Start: 2021-08-29
Today: 2021-09-20
2021 08 29
2021 08 30
2021 08 31
2021 09 01
2021 09 02
2021 09 03
2021 09 04
2021 09 05
2021 09 06
2021 09 07
2021 09 08
2021 09 09
2021 09 10
2021 09 11
2021 09 12
2021 09 13
2021 09 14
2021 09 15
2021 09 16
2021 09 17
2021 09 18
2021 09 19
2021 09 20
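For completeness, here is a variant sketch of the same loop that steps by epoch seconds instead of re-parsing the date string on every iteration (this also assumes GNU date; the start and end dates below are just placeholders for "$1" and today):

```shell
#!/bin/bash
# Step through days by epoch seconds (86400 s per day).
# Pin TZ=UTC so every day is exactly 86400 seconds; in a local
# timezone, DST transitions can make a day 23 or 25 hours long.
export TZ=UTC
start=$(date -d "2021-08-29" +%s)
end=$(date -d "2021-08-31" +%s)
for ((t = start; t <= end; t += 86400)); do
    date -d "@$t" "+%Y %m %d"   # prints "yyyy mm dd" for update.sh
done
```

The while-loop above with date -I -d "$d + 1 day" is the safer general-purpose form; this version merely trades the per-day string re-parsing for integer arithmetic.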

Cumulative Time Variable by Combining M/D/Y and NBA Game Clock data

I am trying to create a cumulative time variable for an NBA shot-log dataset, which will combine three different measurements of the passage of time. I need to use 12 - Game Clock to determine the time of a shot for a given NBA player, since a quarter in the NBA is 12 minutes. Following the same logic, a shot in the second quarter with a game clock of 11:00 would correspond to a cumulative time of 12 + (12 - 11) = 13 minutes. AM/PM does not exist in the game-clock variable; it simply represents how many minutes and seconds remain in the quarter.
Date
Quarter
Game Clock (Min:Sec)
OCT 29, 2014
1
11:01
OCT 29, 2014
3
2:42
OCT 30, 2014
1
1:58
NOV 01, 2014
2
1:15
Desired Output:
Cumulative Time
00:00:59
00:45:58
24:10:02
72:34:45
Please let me know if you need more information. Dataset: https://www.kaggle.com/dansbecker/nba-shot-logs
Thank you in advance.
@tedscr: working with times-only values can be confusing in R. The {lubridate} package comes with three different types: intervals, durations, and periods. In the following I use the {hms} package, which helps with formatting and parsing times and with working with a time as a period (i.e., hms independent of a start date).
Note: under the hood we work with seconds, so you could also coerce everything to numeric seconds or difftime and work with that.
To explain what is happening, I create a new column for each step.
You may want to combine this into a single operation to your liking.
library(hms)
library(lubridate)
library(dplyr)
#----------------------- data -----------------------------
nba <- tibble::tribble(
~Date, ~Quarter, ~`Game.Clock.(Min:Sec)`,
"OCT 29, 2014", 1L, "11:01",
"OCT 29, 2014", 3L, "2:42",
"OCT 30, 2014", 1L, "1:58",
"NOV 01, 2014", 2L, "1:15"
)
quarter <- hms::hms(minutes = 12) # to create a "period" of 12 minutes
nba %>%
mutate(
#---- determine the time played in the current quarter
#---- as Game.Clock is a character emulating mm::ss add 00: to have hms!
GameClockPlayed = quarter - hms::parse_hms(paste0("00:", `Game.Clock.(Min:Sec)`) )
#---- simply add the previous quarters played to the current played time
#---- note: this returns "seconds"
, CumGameClock = (Quarter * quarter) + GameClockPlayed
#---- use lubridate to nicely format the seconds played
, CumGameClock2 = lubridate::seconds_to_period(CumGameClock)
)
This gives you:
Date Quarter `Game.Clock.(Min:Sec)` GameClockPlayed CumGameClock CumGameClock2
<chr> <int> <chr> <drtn> <drtn> <Period>
1 OCT 29, 2014 1 11:01 59 secs 779 secs 12M 59S
2 OCT 29, 2014 3 2:42 558 secs 2718 secs 45M 18S
3 OCT 30, 2014 1 1:58 602 secs 1322 secs 22M 2S
4 NOV 01, 2014 2 1:15 645 secs 2085 secs 34M 45S
If you need to do further math and the hms/lubridate period construction is too cumbersome, you can apply as.numeric() to your period object. Likewise, for the final presentation, you can coerce it back to character formatting.

GridView Layout/Output

I have a website using ASP.NET 2.0 with SQL Server as the database and C# 2005 as the programming language. On one of the pages I have a GridView with the following layout.
Date -> Time -> QtyUsed
The sample values are as follows. (Since this GridView/report is generated for a specific month only, I have extracted and am displaying only the day part of the date, ignoring the month and year.)
01 -> 09:00 AM -> 05
01 -> 09:30 AM -> 03
01 -> 10:00 AM -> 09
02 -> 09:00 AM -> 10
02 -> 09:30 AM -> 09
02 -> 10:00 AM -> 11
03 -> 09:00 AM -> 08
03 -> 09:30 AM -> 09
03 -> 10:00 AM -> 12
Now the user wants the layout to be like:
Time 01 02 03 04 05 06 07 08 09
-------------------------------------------------------------------------
09:00 AM -> 05 10 08
09:30 AM -> 03 09 09
10:00 AM -> 09 11 12
The main requirement is that the days should be in the column header, from 01 to the last date (the reason I extracted only the day part from the date). The time slots should go down as rows.
From my experience with Excel, the idea of Transpose comes to mind to solve this, but I am not sure.
Please help me solve this problem.
Thank you.
Lalit Kumar Barik
You will have to generate the dataset accordingly. I am guessing you are doing some kind of grouping based on the time slot, so generate a column for each day of the month and populate the dataset accordingly.
In SQL Server, there is a PIVOT function that may be of use.
The MSDN article specifies usage and gives an example.
The example is as follows
Table DailyIncome looks like
VendorId IncomeDay IncomeAmount
---------- ---------- ------------
SPIKE FRI 100
SPIKE MON 300
FREDS SUN 400
SPIKE WED 500
...
To show
VendorId MON TUE WED THU FRI SAT SUN
---------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
FREDS 500 350 500 800 900 500 400
JOHNS 300 600 900 800 300 800 600
SPIKE 600 150 500 300 200 100 400
Use this select
SELECT * FROM DailyIncome
PIVOT( AVG( IncomeAmount )
FOR IncomeDay IN
([MON],[TUE],[WED],[THU],[FRI],[SAT],[SUN])) AS AvgIncomePerDay
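Applied to the usage data in the question, the same pattern might look like the sketch below (the table name UsageLog and the column names Day, TimeSlot, and QtyUsed are assumptions, since the question does not show the actual schema):

```sql
-- Pivot QtyUsed so each day of the month becomes a column
-- and each time slot becomes a row (days 01-03 shown; extend
-- the IN list up to [31] as needed).
SELECT [TimeSlot], [01], [02], [03]
FROM (SELECT [Day], [TimeSlot], QtyUsed FROM UsageLog) AS src
PIVOT (SUM(QtyUsed) FOR [Day] IN ([01], [02], [03])) AS p
ORDER BY [TimeSlot];
```

Because the day columns must be listed explicitly, a fixed IN list covering [01] through [31] is the simplest approach; dynamic SQL would be needed to generate the list per month.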
Alternatively, you could select all of the data from DailyIncome and build a DataTable with the data pivoted. Here is an example.
