Read/Write metrics from Datastore - google-cloud-datastore

Is there a way to get detailed Datastore metrics? I am interested in reads and writes as historical data, so that whenever changes in the stack happen I can see visually how Datastore utilization changed.

You can see the list of metrics provided for Datastore in the official documentation:
api/request_count
entity/read_sizes
entity/write_sizes
index/write_count
A straightforward way to observe them is to use Stackdriver Monitoring > Resources > Metrics Explorer > Find resource type and metric.
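Beyond Metrics Explorer, the same metrics can be pulled programmatically through the Cloud Monitoring API. A minimal sketch, assuming the four metric types above live under the `datastore.googleapis.com/` prefix (Cloud Monitoring's naming convention; verify against the metrics list in the documentation):

```python
# Build the same filter strings Metrics Explorer generates, so the Datastore
# metrics can be queried via the Cloud Monitoring time-series API.
DATASTORE_METRICS = [
    "datastore.googleapis.com/api/request_count",
    "datastore.googleapis.com/entity/read_sizes",
    "datastore.googleapis.com/entity/write_sizes",
    "datastore.googleapis.com/index/write_count",
]

def monitoring_filter(metric_type: str) -> str:
    """Filter string for a time-series list call (same filter syntax
    Metrics Explorer uses under the hood)."""
    return f'metric.type = "{metric_type}"'

# With the google-cloud-monitoring client library (not imported here, since it
# needs a live project), this filter would be passed to
# MetricServiceClient().list_time_series(...) together with a time interval.
```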


Verification of export parallelization

When it comes to export, the following property options affect the concurrency of the export, whether directly to storage or to an external table (documentation link):
distribution
distributed
spread
concurrency
query_fanout_nodes_percent
Say I tweak these options and increase or decrease concurrency based on shards or nodes: is there any Kusto command that will let me see exactly how many of these parallel export threads (whether per_shard, per_node, or some percentage) are running? The command .show operation details doesn't show this; it only shows how many separate export commands were issued by the client, not the related parallelization details.
As it stands now, the system does not provide any additional information about the threads used in an export operation, just as this information is not available for queries.
Could you add to your question the benefit of having such information? Is it to track the progress of the command? In any case, if this is something you feel is missing from the service, please open a new item or vote for an existing one on the Azure Data Explorer user voice.
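While the runtime thread count is not exposed, the concurrency properties themselves are set in the command's `with (...)` clause. A hedged sketch of composing such a command (the `.export to csv (...) with (...) <| query` shape follows the documented syntax; the table name and storage URI are placeholders, and the exact property set supported by your cluster should be checked in the docs):

```python
def build_export_command(table, storage_uri, *, distributed=True,
                         distribution="per_shard",
                         query_fanout_nodes_percent=None):
    """Compose a Kusto .export command string carrying the
    concurrency-related properties mentioned in the question."""
    props = [f"distributed={str(distributed).lower()}",
             f'distribution="{distribution}"']
    if query_fanout_nodes_percent is not None:
        props.append(f"query_fanout_nodes_percent={query_fanout_nodes_percent}")
    # h@"..." marks the connection string as an obfuscated (hidden) literal
    return (f'.export to csv (h@"{storage_uri}") '
            f"with ({', '.join(props)}) <| {table}")
```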

Azure Synapse replicated to Cosmos DB?

We have an Azure data warehouse db2 (Azure Synapse) that will need to be consumed by read-only users around the world, and we would like to replicate the needed objects from the data warehouse, potentially to a Cosmos DB. Is this possible, and if so, what are the available options (transactional, merge, etc.)?
Synapse is mainly about getting your data in for analysis. I don't think it has a direct export option of the kind you have described above.
However, what you can do is use Azure Stream Analytics; with it you should be able to integrate/stream whatever you want to any destination you need, such as an app or a database, and so on.
more details here - https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics
I think you can also pull the data into Power BI, and perhaps set up some kind of automatic export from there.
more details here - https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-get-started-visualize-with-power-bi

How could Bosun fit for my usecase?

I need an alerting system where I can define my own metrics and thresholds to report anomalies (basically alerting on the basis of logs and data in a DB). I explored Bosun but am not sure how to make it work. I have the following issues:
There are pre-defined items which are all system level, but I couldn't find a way to add new, i.e. custom, items.
How will Bosun ingest data other than through scollector? As I understand it, could I use Logstash as a data source and skip OpenTSDB entirely (I really don't like the HBase dependency)?
By items I think you mean metrics. Bosun learns about metrics and their tag relationships when you do one of the following:
Relay opentsdb data through Bosun (http://bosun.org/api#sending-data)
Get copies of metrics sent to the api/index route http://bosun.org/api#apiindex
There are also metadata routes, which tell bosun about the metric, such as counter/gauge, unit, and description.
The Logstash datasource will be deprecated in favor of an Elastic datasource in the coming 0.5.0 release; the Elastic one is better but requires ES 2+. To use those expressions, see the raw documentation (the bosun.org docs will be updated next release): https://raw.githubusercontent.com/bosun-monitor/bosun/master/docs/expressions.md. To add it you would have something like the following in the config:
elasticHosts=http://ny-lselastic01.ds.stackexchange.com:9200,http://ny-lselastic02.ds.stackexchange.com:9200,http://ny-lselastic03.ds.stackexchange.com:9200
The functions to query various backends are only loaded into the expression library when the backend is configured.
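To illustrate the second ingestion path above (sending metrics to the api/index route without scollector), here is a minimal sketch; the datapoint field names follow the OpenTSDB put JSON format that the route accepts, and the host, port, metric name, and tags are placeholders:

```python
import json
import time

def index_payload(metric, value, tags):
    """Build the OpenTSDB-style datapoint list accepted by Bosun's
    /api/index (and /api/put) routes."""
    return [{
        "metric": metric,
        "timestamp": int(time.time()),
        "value": value,
        "tags": tags,  # the tag keys/values are the relationships Bosun learns
    }]

# POSTing this JSON to http://<bosun-host>:<port>/api/index (placeholders)
# teaches Bosun about a custom metric without going through scollector.
body = json.dumps(index_payload("custom.orders.count", 42, {"host": "web01"}))
```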

Is there any way to input the result obtained from curl via fluentd?

We are seeking the simplest way to send Alfresco's audit log to Elasticsearch.
I think using the query Alfresco supplies to fetch the audit log would be the simplest way (since the audit log data is hard to inspect in the DB).
This query returns its results as JSON, so I would like to run the query directly from fluentd, the way curl does, and send the output to Elasticsearch.
I roughly understand that fluentd can output to Elasticsearch, but I wonder whether I can run the curl command / direct query from within fluentd.
Otherwise, if you have another simple idea for getting Alfresco's audit log, kindly let me know.
I am not sure whether I understood fully, but based on your last statement I am giving this answer.
To retrieve audit entries from an Alfresco repository, you can directly use Alfresco's REST APIs, which allow you to access them.
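A sketch of building such a request URL. The `/alfresco/service/api/audit/query/{app}` path and the `alfresco-access` application name are assumptions based on Alfresco's public audit web scripts; verify them against your Alfresco version:

```python
from urllib.parse import urlencode

def audit_query_url(base, app="alfresco-access", **params):
    """Build the URL of Alfresco's audit query web script
    (path and default app name are assumptions; check your version)."""
    url = f"{base}/alfresco/service/api/audit/query/{app}"
    qs = urlencode(params)
    return f"{url}?{qs}" if qs else url
```

From fluentd's side, one simple approach is the exec input plugin, which runs a command (such as curl against a URL like this) on an interval and forwards its output, which an Elasticsearch output plugin can then ship.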

Get specific data from Multi-Channel Funnels Reporting API?

In my application, I get traffic and conversion data from my account using the Google Analytics Core Reporting API and the Multi-Channel Funnels Reporting API.
To get traffic data, I use the GET command from the Core Reporting API.
In this command I pass, among other things, the parameters to display (dimensions and metrics) and the filtering options for my request: a filter and a segment (I use dynamic segments).
Here's an example of one of the queries:
GET https://www.googleapis.com/analytics/v3/data/ga?
ids=ga:XXXXXXXXXX &
start-date=2013-03-05 &
end-date=2013-04-04 &
metrics=ga:visits,ga:pageviewsPerVisit,ga:avgTimeOnSite,ga:percentNewVisits,ga:entranceBounceRate &
dimensions=ga:source,ga:keyword &
filters=ga:visits>0 &
segment=dynamic::ga:medium==CPA,ga:medium==CPC,ga:medium==cpm;ga:campaign!#img_ &
sort=-ga:visits &
key={YOUR_API_KEY}
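The query above can be assembled programmatically; a sketch in Python, using the example values from the query (the parameter names are the documented Core Reporting ones, while the view ID and API key are placeholders):

```python
from urllib.parse import urlencode

def core_reporting_url(view_id, api_key, **overrides):
    """Assemble the Core Reporting GET request shown above;
    pass overrides to change any parameter."""
    params = {
        "ids": f"ga:{view_id}",
        "start-date": "2013-03-05",
        "end-date": "2013-04-04",
        "metrics": "ga:visits,ga:pageviewsPerVisit,ga:avgTimeOnSite",
        "dimensions": "ga:source,ga:keyword",
        "filters": "ga:visits>0",
        "segment": "dynamic::ga:medium==CPC,ga:medium==cpm",
        "sort": "-ga:visits",
        "key": api_key,
    }
    params.update(overrides)  # urlencode percent-escapes ':', '=', ','
    return "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```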
This query returns me the traffic data matching the filter and segment conditions.
But when I wanted to retrieve the conversion data for the same criteria with the MCF Reporting API, I encountered a problem.
The GET command of the MCF Reporting API does not support segments, and its filter does not allow OR conditions.
The Google Analytics web interface, however, can apply segments to conversion data. I've read that Channel Groupings can be applied to query results in the web interface, but they are tied to the account; since I use a service account for authentication and API access, they are not available to me, and I do not know how to apply them through the API.
How do I filter the conversion data in the request so that it satisfies the conditions written above?
Is there a way to solve my problem?
Thanks, and sorry for my English.
