Two epoch dates that are more than one week in the past:
1666324555879
1666170554391
I determined their dates with this tool: https://www.epochconverter.com/
Expectation: items are deleted within 48 hours of the epoch date, based on the field marked (TTL).
Outcome: no items in any table are being deleted.
Records in the Console:
DynamoDB expects the TTL value to be in seconds, but yours are in milliseconds.
Your items will be deleted in about 50k years ;)
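If rewriting the stored values is an option, here is a minimal sketch of the fix with boto3 (the table name, key, and TTL attribute expiresAt are all hypothetical):

import boto3

table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table name

# 1666324555879 is milliseconds since the epoch; DynamoDB TTL expects whole seconds.
ttl_ms = 1666324555879
ttl_seconds = ttl_ms // 1000  # 1666324555, i.e. October 2022

# Rewrite the TTL attribute so the expiry lands within 48 hours as expected.
table.update_item(
    Key={"pk": "item-1"},  # hypothetical key
    UpdateExpression="SET expiresAt = :ttl",
    ExpressionAttributeValues={":ttl": ttl_seconds},
)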
I have an ASP.NET application deployed in Azure. This generates plenty of logs, some of which are exceptions. I have a query in a Log Analytics workspace that picks up exceptions from the logs.
I would like to know what is the best and/or cheapest way to detect anomalies in the exception count over a time period.
For example, if the average number of exceptions per hour is N (based on data collected over the past month or so), and the hourly count goes above N+20 at any point (checked every hour or so), then I need to be notified.
N would change dynamically based on the trend.
I would like to know what is the best and/or cheapest way to detect anomalies in the exception count over a time period.
Yes, we can achieve this with the following steps:
Store the average value in a stored query result in Azure, using the .set stored_query_result command.
There are some limitations on how long the result is kept; refer to the Microsoft docs for detailed information.
Note: the stored query result is available for only 24 hours.
The workaround follows.
1. Set the stored query result:
// Here I am using a stored query result to hold the average trace count per 5-hour bin
.set stored_query_result average <|
traces
| summarize events = count() by bin(timestamp, 5h)
| summarize avg(events)
2. Once the query result is set, you can use the stored value in another KQL query (the stored value remains available for 24 hours):
// Retrieve the stored query result
stored_query_result(<StoredQueryResultName>)
| ...  // the rest of the query follows as per your need
3. Schedule the alert on the combined query, sketched below.
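Putting the pieces together, a sketch of the alert query (the stored query result name average comes from step 1; avg_events is the column name that summarize avg(events) generates; the +20 threshold is from the question, and traces can be swapped for your exceptions query):

// Compare the last hour's count against the stored baseline.
let baseline = toscalar(stored_query_result("average") | project avg_events);
traces
| where timestamp > ago(1h)
| summarize current = count()
| where current > baseline + 20

Scheduled as a log alert, this fires whenever the query returns a row.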
I have a column that holds the datetime for each entry in the table. The entries span many days and all times of day. I want to filter only for entries that happened within a certain window of the day, like between 10am and 1pm. How does one do that?
The datetimes are stored as a string in ISO-8601 format. Ex: 2021-01-22T12:50:00-05:00
First, keep the SQLite Date and Time Functions doc handy.
Something like:
WHERE strftime('%H:%M', yourdate) BETWEEN '10:00' AND '13:00'
should accomplish the task.
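One caveat: SQLite's date functions normalize strings that carry a UTC offset (like the -05:00 in your example) to UTC, so the comparison above is on UTC times. If the 10am-1pm window is meant in local wall-clock time, and the machine running the query sits in that same zone, adding the 'localtime' modifier shifts it back:

WHERE strftime('%H:%M', yourdate, 'localtime') BETWEEN '10:00' AND '13:00'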
I have my dynamo db table as follows:
HashKey (Date), RangeKey (timestamp)
The DB stores the data of each day (hash key) and timestamp (range key).
Now I want to query data of last 7 days.
Can I do this in one query, or do I need to call DynamoDB 7 times, once for each day? The order of the data does not matter, so can someone suggest an efficient way to query this?
I think you have a few options here.
BatchGetItem - The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key. You could specify all 7 primary keys and fire off a single request. Note, though, that with a composite key BatchGetItem needs both the partition and sort key of each item, so this only fits if you already know the exact timestamps.
Seven Query calls to DynamoDB, one per day. Not ideal, but it'd get the job done.
Introduce a global secondary index that projects your data into the shape your application needs. For example, you could introduce an attribute that represents an entire week by using a truncated timestamp:
2021-02-08 (represents the week of 02/08/21T00:00:00 - 02/14/21T23:59:59)
2021-02-15 (represents the week of 02/15/21T00:00:00 - 02/21/21T23:59:59)
I call this a "truncated timestamp" because I am effectively ignoring the HH:MM:SS portion of the timestamp. When you create a new item in DDB, you could introduce a truncated timestamp that represents the week it was inserted. Therefore, all items inserted in the same week will show up in the same item collection in your GSI (see the sketch after this list).
Depending on the volume of data you're dealing with, you might also consider separate tables to segregate ranges of data. AWS has an article describing this pattern.
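As a sketch of the GSI option with boto3 (the table name events, index name week-index, and week attribute are all hypothetical):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("events")  # hypothetical table

# One Query against the GSI returns every item whose truncated
# timestamp falls in the week of 2021-02-08.
response = table.query(
    IndexName="week-index",
    KeyConditionExpression=Key("week").eq("2021-02-08"),
)
items = response["Items"]

Note that a rolling 7-day window can straddle two week buckets, so in the worst case this costs two queries rather than one.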
I am trying to save a duration in Firestore. For example, a user has been working on an exercise for 4 minutes and 44 seconds. What is the recommended data type for this information?
It would be nice if the Firestore increment operator would work on the used data type. I thought about using a Timestamp, but in the Firebase SDK, the increment method expects a number, and a Timestamp is not a number.
What is the recommended way of saving durations in Firestore?
Because you are talking about a duration, the solution would be to store it as a number, in particular as an integer, which is a supported data type. In your example, 4 minutes and 44 seconds would be stored as:
284
And this is because 4*60 + 44 = 284.
If you want to get back the number of minutes and seconds, simply reverse the above algorithm.
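For example, a quick Kotlin sketch of that reversal:

val total = 284          // stored duration in seconds
val minutes = total / 60 // 4
val seconds = total % 60 // 44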
It would be nice if the Firestore increment operator would work on the used data type.
Firestore already provides such an option. If you want to add, for example, an hour to an existing duration, you can use:
FieldValue.increment(3600) // 60 * 60
In code it will be as simple as:
val db = Firebase.firestore
db.collection(collName).document(docId).update("field", FieldValue.increment(3600))
If you want to decrement by an hour, pass a negative value:
FieldValue.increment(-3600)
And in code:
db.collection(collName).document(docId).update("field", FieldValue.increment(-3600))
I thought about using a Timestamp
A timestamp is not a duration. Firestore timestamps contain time units as small as nanoseconds. So a Timestamp object contains the year, month, day, hour, minutes, seconds, milliseconds, and nanoseconds. This will not solve your problem in any way.
I have a table against which a DELETE query runs every few seconds. With that query I want to delete all entries that are older than 30 seconds.
At the moment I have a value for the creation time in my table, creationdate integer, which contains the date of creation in milliseconds since 1 January 1970.
In my DELETE query that I run every 30 seconds I compare it against the current time:
delete from table where creationdate < strftime('%s', 'now', '-30.0 seconds')
I referred to this article for the strftime formatting.
This query doesn't delete the entries; that's my problem. It works without the WHERE clause, so I guess the function is the problem.
What am I missing out or is there even a better way to solve this problem?
Edit: it doesn't work without the '-30.0 seconds' modifier either, so I think I misunderstood how strftime works. I referred to this question as well.
I found my problem. I overlooked the fact that in SQLite, strftime('%s') returns SECONDS, not MILLISECONDS, so my millisecond creationdate values were always far larger than the cutoff and nothing ever matched.
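Given that, one fix (keeping the column in milliseconds; mytable is a placeholder name) is to scale the current time up to milliseconds before comparing:

DELETE FROM mytable
WHERE creationdate < (strftime('%s', 'now') - 30) * 1000;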