I need to make a timechart of concurrent VPN users connected to my Cisco ASA like the one in the following screenshot:
[Screenshot: "the perfect" timechart in Splunk]
Another timechart screenshot here: https://drive.google.com/file/d/1dW8nyG3dz3GbPiXuiXZofuhccoHpEHSP/view?usp=sharing
In Splunk it was made possible by the awesome query posted here:
https://community.splunk.com/t5/Splunk-Search/Concurrent-Active-VPN-Sessions-on-a-Timechart/m-p/493141#M137524
Assuming the same logic can achieve the desired result, I just need help converting the following part of the above Splunk query into KQL:
| sort 0 _time
| eval time2=_time
| bin span=20m time2
| eval time2=if(status="disconnected",NULL,time2)
| eval _time=coalesce(time2,_time)
| streamstats count(eval(status="assigned")) as session by user
| stats values(eval(if(status="assigned",round(_time),NULL))) as start values(eval(if(status="disconnected",round(_time),NULL))) as end by user session
| eval timerange=mvrange(start,end,1200)
| mvexpand timerange
| rename timerange as _time
| timechart span=20m count(user)
Expected Output (from splunk) : https://drive.google.com/file/d/11F5p_zOGlgenIqVsToXiPlL2UplSIRNa/view?usp=sharing
Sample Data (from Sentinel, parsed) :
https://drive.google.com/file/d/1wzansi1MfCnUylNHSeUHiw8POIxzS4q_/view?usp=sharing
Yeah, we had to switch from Splunk to Azure Sentinel. (Don't ask why.)
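For reference, here is roughly where I've gotten with a KQL sketch of the same logic. It assumes the parsed Sentinel data sits in a table called VpnEvents with TimeGenerated, UserName and Status columns - those names are just placeholders for my parsed schema, and I haven't validated this against real data:
VpnEvents
| sort by UserName asc, TimeGenerated asc
// number each user's sessions: the counter bumps on every "assigned" event (streamstats equivalent)
| extend SessionId = row_cumsum(toint(Status == "assigned"), UserName != prev(UserName))
// pair each session's assign time with its disconnect time
| summarize StartTime = minif(TimeGenerated, Status == "assigned"),
            EndTime = maxif(TimeGenerated, Status == "disconnected")
    by UserName, SessionId
// sessions with no disconnect yet are dropped, as mvrange would do with a null end
| where isnotnull(StartTime) and isnotnull(EndTime)
// one timestamp per 20-minute step between start and end (the mvrange + mvexpand part)
| extend SampleTime = range(bin(StartTime, 20m), EndTime, 20m)
| mv-expand SampleTime to typeof(datetime)
| summarize ConcurrentUsers = dcount(UserName) by bin(SampleTime, 20m)
| render timechart
I'm not sure whether dcount or plain count best matches Splunk's count(user) in the last step, so corrections on any of this are welcome.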
What is a solid DynamoDB access pattern for storing data from a bunch of receipts of identical format? I would use SQL for maximum flexibility on more advanced analytics, but as a learning exercise I want to see how far one can go with DynamoDB here. For starters I'd like to query for aggregate overall and per-product spending for a given time range, track product price history, and sort receipts by total - things along those lines. But I also want it to be as flexible as possible for future queries I haven't thought of yet. Would something like this, plus some GSIs, work?
------------------------------------------------------------------------------------------------------------
| pk          | sk                     | unit $ | qty | total $ | receipt total | items                    |
------------------------------------------------------------------------------------------------------------
| "product a" | "2021-01-01T12:00:00Z" | 2      | 2   | 4       |               |                          |
| "product b" | "2021-01-01T12:00:00Z" | 2      | 3   | 6       |               |                          |
| "receipt"   | "2021-01-01T12:00:00Z" |        |     |         | 10            | array of above item data |
| "product a" | "2021-01-02T12:00:00Z" | 1.75   | 3   | 5.25    |               |                          |
| "product c" | "2021-01-02T12:00:00Z" | 2      | 2   | 4       |               |                          |
| "receipt"   | "2021-01-02T12:00:00Z" |        |     |         | 9.25          | array of above item data |
------------------------------------------------------------------------------------------------------------
You have to decide your access patterns first and build the DynamoDB design off of those, not the other way around. No one outside your team/product can tell you what your access patterns are; that depends entirely on your product's needs.
Ask yourself: what pieces of information will you have in hand, and what do you need to retrieve when you have them? Then decide which of those queries will be run most often and craft your PK/SK combinations around them. If you can't fit all your queries into just one or two pieces of information, you may want to set up an index - but indexes should be reserved for the queries you run far less often.
If you need to, it's also accepted practice to store the same information twice - as two items in the table - because writes are easier/cheaper than multiple reads (a write is pretty much one WCU per item, while any query/scan can cost multiple RCUs even if you only need one part of the result; plus, since indexes are replications of the table, there is a chance of a desync if you write and read too quickly or touch the same item in parallel calls).
Take your time now to sit down and consider everything your app will need to query DynamoDB for. The more you can figure out up front, the better, and if you can set your PK to something that will almost always be available to the calling function, you will be in a much better state.
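Purely as an illustration with your receipt data (treat these key choices as assumptions, not a recommendation): per-product spending over a time range is already a single Query against your current pk = product / sk = timestamp layout, while the rarer "sort receipts by total" could live on a GSI that reuses pk as its partition key and receipt total as its sort key. Because the product items have no receipt total attribute, only the receipt items land in that index (a sparse index):
-------------------------------------------------------------------
| pk (GSI hash) | receipt total (GSI range) | items              |
-------------------------------------------------------------------
| "receipt"     | 9.25                      | array of item data |
| "receipt"     | 10                        | array of item data |
-------------------------------------------------------------------
A single Query on pk = "receipt" against that index then returns receipts ordered by total; the trade-off is that one partition takes all the receipt traffic, which can run hot if volume grows.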
I am trying to find what is causing the high RU usage on my Cosmos DB account. I enabled Log Analytics on the account and ran the Kusto query below to get the RU consumption by collection name.
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| summarize ConsumedRUsPer15Minute = sum(todouble(requestCharge_s)) by collectionName_s, _ResourceId, bin(TimeGenerated, 15m)
| project TimeGenerated , ConsumedRUsPer15Minute , collectionName_s, _ResourceId
| render timechart
We have only one collection in the Cosmos DB account (prd-entities), which is represented by the red line in the chart. I am not able to figure out what the blue line represents.
Is there a way to get more details about the RU usage attributed to the empty collection name (i.e., the blue line)?
I'm not sure, but I don't think there is actually an empty collection consuming RUs.
In my own testing, when I run your Kusto query I also see the 'empty collection', but when I look at the underlying rows, they all correspond to operations against my existing collection. My point is that we shouldn't sum by collectionName_s, especially since you only have one collection in total; try using requestResourceId_s instead.
When using requestResourceId_s there are still some rows with no id, but they cost 0 RUs.
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| summarize ConsumedRUsPer15Minute = sum(todouble(requestCharge_s)) by requestResourceId_s, bin(TimeGenerated, 15m)
| project TimeGenerated , ConsumedRUsPer15Minute , requestResourceId_s
| render timechart
Actually, you can check which operations the requestCharge_s values come from: look at the details in Results rather than in Chart, and order by collectionName_s; then you will see the requests that produce the 'empty collection' and can judge whether they ran against your collection.
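For example, something along these lines breaks down the RUs behind the blank collection name by operation, without the chart (OperationName and requestResourceType_s are the columns I see in my workspace - adjust them to whatever your logs expose):
AzureDiagnostics
| where TimeGenerated >= ago(24h)
| where Category == "DataPlaneRequests"
| where isempty(collectionName_s)
| summarize ConsumedRUs = sum(todouble(requestCharge_s)), Requests = count() by OperationName, requestResourceType_s
| order by ConsumedRUs desc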
I was planning to use a Dynamo table as a sort of replication log, so I have a table that looks like this:
+--------------+--------+--------+
| Sequence Num | Action | Thing |
+--------------+--------+--------+
| 0 | ADD | Thing1 |
| 1 | DEL | Thing1 |
| 2 | ADD | Thing2 |
+--------------+--------+--------+
Each of my processes keeps track of the last sequence number it read. Then on an interval it issues a Scan against the table with ExclusiveStartKey set to that sequence number. I assumed this would result in reading everything after that sequence, but instead I am seeing inconsistent results.
For example, given the table above, if I do a Scan(ExclusiveStartKey=1), I get zero results when I am expecting to see the 3rd row (seq=2).
I have a feeling it has to do with the internal hashing DynamoDB uses to partition the items and that I am misusing the ExclusiveStartKey option.
Is this the wrong tool for the job?
Alternatively, each process could issue a Query for seq+1 on each interval (looping if anything was found), which would result in the same ReadThroughput, but would require N API calls instead of N/1MB I would get with a Scan.
A DynamoDB Scan operation does not proceed in the order of the hash key's value; items come back in the order of DynamoDB's internal hashing. So ExclusiveStartKey does not let you jump to an arbitrary point in the key space - it is meant to be the LastEvaluatedKey returned by a previous page of the same Scan.
For this example table with the Sequence ID, what I want can be accomplished with a Kinesis stream.
I have an Application Insights Analytics (Kusto) query that looks like this...
requests
| summarize count() by bin(duration, 1000)
| order by duration asc nulls last
...which gives me something like this, showing the number of requests recorded in Application Insights, binned by duration in one-second (1000 ms) buckets:
| duration | count_ |
| 0        | 1000   |
| 1000     | 500    |
| 2000     | 200    |
I would like to be able to add another column which shows the count of exceptions from all requests in each bin.
I understand that extend is used to add additional columns, but to do so I would have to reference the 'outer' expression to get the bin constraints, which I don't know how to do. Is this the best way to do this? Or am I better off trying to join the two tables together and then doing the summarize?
Thanks
As you suspected, extend will not help you much here. What you need is to run join kind=leftouter on the operation IDs (leftouter is needed so you won't drop requests that did not have any exceptions):
requests
| join kind=leftouter (
exceptions
| summarize exceptionsCount = count() by operation_Id
) on operation_Id
| summarize count(), sum(exceptionsCount) by bin(duration, 1000)
| order by duration asc nulls last
I'm having problems filtering data with Polymer's firebase-collection element. My code is like this:
<firebase-collection location="--url--" order-by-child="user_id" equal-to="1" log="true" data="{{message}}"></firebase-collection>
My database is:
messages
|
|__0__content
| |__letter_id
| |__user_id
| |__datetime
|
|__1__content
| |__letter_id
| |__user_id
| |__datetime
|
|__etcetera
This should get the messages with a user_id equal to 1. However, it shows nothing. I guess it's a problem with my syntax, but I can't figure out what's wrong.
Never mind - I had my Firebase database set up with user_id stored as a number, so the element could not fetch the data.
I had the same issue but fixed it by using both the orderByValueType (as number) and the orderByChild set like so:
<firebase-collection
order-by-child="posted_date"
start-at="0"
order-value-type="number"
limit-to-first="1"
log="true"
location="https://somefirebaselocation.firebaseio.com/articles"
data="{{articles}}"></firebase-collection>
Without the order-value-type the query did not work.