I have a backlog of tasks. Every task has createAt and isDone fields.
I would like to query all tasks that are not done, except for the current date: for the current date, I would like to get every task regardless of status.
where( isDone == false if date < currentDate)
Is there any way in Firebase to do this, or do I have to query twice and merge the results?
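If I do end up having to run two queries and merge, I assume it would look roughly like this (a sketch assuming Firestore with the v9 web SDK, createAt stored as a timestamp, and db as my initialized instance; the first query would also need a composite index on isDone and createAt):

import { collection, query, where, getDocs } from 'firebase/firestore';

// Midnight today, the boundary for "the current date"
const startOfToday = new Date();
startOfToday.setHours(0, 0, 0, 0);

// Query 1: unfinished tasks created before today
const openOlder = query(
  collection(db, 'tasks'),
  where('createAt', '<', startOfToday),
  where('isDone', '==', false)
);

// Query 2: everything created today, regardless of status
const todayAll = query(
  collection(db, 'tasks'),
  where('createAt', '>=', startOfToday)
);

const [older, today] = await Promise.all([getDocs(openOlder), getDocs(todayAll)]);
const tasks = [...older.docs, ...today.docs].map(d => ({ id: d.id, ...d.data() }));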
I have an asp.net application deployed in azure. This generates plenty of logs, some of which are exceptions. I do have a query in Log Analytics Workspace that picks up exceptions from logs.
I would like to know what is the best and/or cheapest way to detect anomalies in the exception count over a time period.
For example, if the average number of exceptions per hour is N (based on data collected over the past month or so), and the count goes above N+20 at any point (checked every hour or so), then I need to be notified.
N would be dynamically changing based on trend.
Yes, we can achieve this with the following steps:
Store the average value in a stored query result in Azure. A stored query result is created with the .set stored_query_result command.
There are some limitations on keeping the result; refer to the MSDOC for detailed information.
Note: The stored query result will be available for only 24 hours.
The workaround follows:
1. Set the stored query result:
// Here I am using a stored query result to store the average trace message count per 5-hour bin
.set stored_query_result average <|
traces
| summarize events = count() by bin(timestamp, 5h)
| summarize avg(events)
2. Once the query result is set, you can use the stored value in another KQL query (the stored value remains available for 24 hours):
// Retrieve the stored query result
stored_query_result(<StoredQueryResultName>)
| ... // the rest of the query follows as per your need
3. Schedule the alert.
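For the alert itself, here is a minimal sketch of the scheduled query, assuming the stored result is named average as above (so its column is avg_events), that exceptions land in the Application Insights exceptions table, and using the +20 margin from the question:

// Compare the last hour's exception count against the stored baseline
let baseline = toscalar(stored_query_result("average") | project avg_events);
exceptions
| where timestamp > ago(1h)
| summarize current = count()
| where current > baseline + 20

Scheduling this to run every hour in Azure Monitor and alerting when it returns any rows gives the hourly check described in the question.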
I have a query to get all of a specific type of users:
MATCH (users:User {role: "USER", hasCompletedRegistration: true})
RETURN users
There is also a createdAt datetime property. What is the cleanest way to only return users WHERE createdAt is within the last 24 hours of when the query is executed? The goal is for a nightly CRON job to find all newly registered users and send them a welcome message, hence the specific timeframe.
You'll need to have the property indexed, and you'll need to leverage index-backed ordering, which will grab the ordered entries from the index instead of having to look at and filter all :User nodes.
So, for the index you first need:
CREATE INDEX ON :User(createdAt)
You should be using temporal types, so for this I'll assume a dateTime type is used.
Next, for the WHERE clause to leverage the index, you can't apply a function to the createdAt property or use any accessors on it; it needs to be left alone. We can calculate 24 hours (one day) ago by using dateTime() to get the current instant and subtracting a duration, then finish with an inequality.
Also, we recommend using singular instead of plural when you're not working with list types, so user instead of users, since it represents a single user per row.
MATCH (user:User {role: "USER", hasCompletedRegistration: true})
WHERE user.createdAt > dateTime() - duration({days:1})
RETURN user
If you view the EXPLAIN plan of the query, you want to see a NodeIndexSeekByRange being used. If you see a different index being used (say, on role or hasCompletedRegistration), then you can provide a planner hint to force it to use the index on createdAt:
MATCH (user:User {role: "USER", hasCompletedRegistration: true})
USING INDEX user:User(createdAt)
WHERE user.createdAt > dateTime() - duration({days:1})
RETURN user
You can use the duration.inSeconds function and read its hours component; the datetime() function returns the current system datetime.
MATCH (n:User)
WHERE duration.inSeconds(n.createdAt, datetime()).hours <= 24
RETURN n
I know the query below is not supported in DynamoDB since you must use an equality expression on the HASH key.
query({
TableName,
IndexName,
KeyConditionExpression: 'purchases >= :p',
ExpressionAttributeValues: { ':p': 6 }
});
How can I organize my data so I can efficiently make a query for all items purchased >= 6 times?
Right now I only have 3 columns, orderID (Primary Key), address, confirmations (GSI).
Would it be better to use a different type of database for this type of query?
You would probably want to use the DynamoDB streams feature to perform aggregation into another DynamoDB table. The streams feature will publish events for each change to your data, which you can then process with a Lambda function.
I'm assuming that in your primary table you track each purchase by incrementing a counter. A simple example of the logic: on each update, check the item's purchases count, and if it is >= 6, add the item ID to a list attribute (itemIDs or similar) in another DynamoDB table. Depending on how you want to query this statistic, you might create a new entry every day, hour, etc.
Bear in mind DynamoDB has a 400KB size limit per item, so this may not be the best solution depending on how many item IDs you would need to capture in the itemIDs attribute for a given time period.
You would also need to consider how you reset your purchases counter (this might be a scheduled batch job where you reset purchase count back to zero every x time period).
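A rough sketch of what that stream-triggered Lambda might look like (Node.js with AWS SDK v3; the PopularItems table, its period key, and the daily bucketing are illustrative assumptions):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, UpdateCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== 'MODIFY') continue;
    const image = record.dynamodb.NewImage;               // DynamoDB-JSON shape
    const purchases = Number(image.purchases?.N ?? 0);
    if (purchases >= 6) {
      // Append the order ID to a string set on today's aggregate entry
      await doc.send(new UpdateCommand({
        TableName: 'PopularItems',                        // hypothetical aggregate table
        Key: { period: new Date().toISOString().slice(0, 10) }, // one entry per day
        UpdateExpression: 'ADD itemIDs :id',
        ExpressionAttributeValues: { ':id': new Set([image.orderID.S]) },
      }));
    }
  }
};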
Alternatively you could capture the time period in your primary table and create a GSI that is partitioned based upon time period and has purchases as the sort key. This way you could efficiently query (rather than scan) based upon a given time period for all items that have purchase count of >= 6.
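With such a GSI in place, the query from the question becomes a valid key condition. A sketch, where the period attribute and index name are assumptions:

query({
  TableName,
  IndexName: 'period-purchases-index',   // hypothetical GSI: period (hash), purchases (range)
  KeyConditionExpression: '#period = :period AND purchases >= :p',
  ExpressionAttributeNames: { '#period': 'period' },
  ExpressionAttributeValues: { ':period': '2024-01-15', ':p': 6 }   // example day bucket
});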
You don't need to reorganise your data; just use a scan instead of a query:
scan({
TableName,
IndexName,
FilterExpression: 'purchases >= :p',
ExpressionAttributeValues: { ':p': 6 }
});
I have a model called "ticket" that has a start time and an end time. I would like to be able to sort/divide tickets by time on the front-end. There will be 3 categories: past (now > end_time), current (start_time < now < end_time), and future (now < start_time), where now represents the real UTC time. I am thinking of having a "state" field in the ticket model which will contain the value "past", "current", or "future". I need a way to update the state of tickets based on time.
Would a cron job running every minute be an appropriate solution for this? It will iterate through all entries and do a check if the state should be updated and then perform the update if necessary. Is this solution scalable? Is there a better solution?
If it is necessary info I am thinking of using Firebase for the database and Google App Engine for the cron job.
In most cases it is the wrong approach to save data in a database when that data is invalidated by time but can be calculated from the same record's other fields. There is also a risk of inconsistencies if the dates get updated but the cron job hasn't run yet, or when the dates are close to now.
I would suggest that you always calculate that info in your db queries by using the date fields. In MySQL it would look something like this:
SELECT
  CASE
    WHEN end_time < NOW() THEN 'past'
    WHEN start_time < NOW() THEN 'current'
    ELSE 'future'
  END AS state
FROM tickets
Alternatively you can just fetch the start_time and end_time fields and handle the state in your application. If you want to query the entries by status, then you can also use the date columns in the filtering clauses.
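For example, fetching only the tickets that are current right now (a sketch against the same hypothetical tickets table):

SELECT *
FROM tickets
WHERE start_time < NOW()
  AND end_time > NOW();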
This drops the need to have a cron job update the status.
There are several date/time fields in this view and I can never wrap my head around which column to order by (and which secondary column) in order to retrieve a list of SQL statements in the order in which they were executed on the server.
StartTime - The timestamp associated with when the query was submitted to Teradata for parsing.
FirstStepTime - The timestamp associated with when the first step of the query was executed.
FirstRespTime - The timestamp associated with when the first row was returned to the client.
The gap in time between StartTime and FirstStepTime includes parsing time and any workload throttle delay that was enforced by Teradata's Dynamic Workload Manager. For the sake of keeping things simple here I will defer to an excellent article written by Teradata's Carrie Ballinger on dealing with delay time here.
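To answer the ordering question directly, here is a hedged sketch, assuming the standard DBQL view DBC.QryLogV: ordering by FirstStepTime, falling back to StartTime for requests that never reached their first step, approximates the order in which statements actually began executing:

SELECT QueryID, StartTime, FirstStepTime, FirstRespTime
FROM DBC.QryLogV
ORDER BY COALESCE(FirstStepTime, StartTime), StartTime;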