I want to use Google's Data Studio tool to make visualizations for the data I have in Datastore, but it is not available as a data source in the interface. How can I approach this?
You can write your own Community Connector to fetch data using the Cloud Datastore API. One thing to keep in mind here is that Data Studio only accepts tabular data, so you will need to reshape your data in the connector code if necessary.
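The connector itself runs in Apps Script, but the reshaping step is easy to prototype elsewhere first. Below is a minimal Python sketch of the idea, flattening Datastore entities into flat rows; the Book kind and its title/published properties are hypothetical names for illustration:

```python
# pip install google-cloud-datastore
from google.cloud import datastore

client = datastore.Client()

# Flatten each entity into a flat row dict, since Data Studio
# only accepts tabular data. "Book", "title" and "published"
# are hypothetical names for this sketch.
rows = []
for entity in client.query(kind="Book").fetch():
    rows.append({
        "id": entity.key.id_or_name,
        "title": entity.get("title"),
        "published": str(entity.get("published")),
    })
```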
Your options are pretty limited. You'll probably have to move/convert your Datastore entities into a database that can act as a data source for Data Studio. The following link will help you get started:
https://support.google.com/datastudio/topic/6370347?hl=en&ref_topic=7441382
You can export Datastore data to BigQuery, which is available as a data source for Data Studio. There is more detail on this here:
https://cloud.google.com/bigquery/docs/loading-data-cloud-datastore
1. Open the BigQuery web UI in the Cloud Console.
2. In the navigation panel, in the Resources section, expand your Google Cloud project and select a dataset. Click Create table. The process for loading data is the same as the process for creating an empty table.
3. On the Create table page, in the Source section:
   - For Create table from, select Cloud Storage.
   - In the source field, enter the Cloud Storage URI. The Cloud Storage bucket must be in the same location as the dataset that contains the table you're creating. The URI for your Datastore export file should end with [KIND_NAME].export_metadata or export[NUM].export_metadata. For example: default_namespace_kind_Book.export_metadata. In this example, Book is the kind name, and default_namespace_kind_Book is the file name generated by Datastore.
   - For File format, select Datastore Backup.
4. On the Create table page, in the Destination section:
   - For Dataset name, choose the appropriate dataset.
   - In the Table name field, enter the name of the table you're creating in BigQuery.
   - Verify that Table type is set to Native table.
5. In the Schema section, no action is necessary; the schema is inferred from a Datastore export.
6. Select applicable items in the Advanced options section and then click Create table. For information on the available options, see Datastore options.
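If you'd rather script the load than click through the console, the same thing can be done with the BigQuery client library. A hedged sketch, where the bucket, export file, and destination table are placeholders:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical URI and table id; the source format must be
# DATASTORE_BACKUP for Datastore export files.
uri = "gs://my-bucket/default_namespace_kind_Book.export_metadata"
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.DATASTORE_BACKUP,
)
load_job = client.load_table_from_uri(uri, "my_dataset.books", job_config=job_config)
load_job.result()  # waits for the load job to finish
```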
Is it possible to filter the data exported to BigQuery? For example, I only want fatal crashes (is_fatal=TRUE) to be exported, but not non-fatal exceptions, which take up much more space in my case.
I checked out the data transfer options but could not find anything related to filtering or schema customization.
The only configuration options for exporting Crashlytics data to BigQuery are to:
Turn it on or off
Enable streaming of intra-day events (if your project is on the Blaze plan)
It's not possible to control what crash data is exported beyond that.
If you want less data to be stored in BigQuery, you'll have to copy the data you want to keep over to new tables, and delete the ones generated by the integration.
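As a rough sketch of that copy-then-delete approach in Python, with placeholder project, dataset, and table names (is_fatal comes from the question above):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Keep only fatal crashes in a new table. The table names here are
# hypothetical; substitute whatever the Crashlytics integration
# created in your project.
client.query("""
    CREATE OR REPLACE TABLE `my_project.crash_archive.fatal_only` AS
    SELECT *
    FROM `my_project.firebase_crashlytics.com_example_app_ANDROID`
    WHERE is_fatal = TRUE
""").result()

# Then drop the original table to reclaim the space.
client.delete_table("my_project.firebase_crashlytics.com_example_app_ANDROID")
```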
tl;dr: I want to reference an external data source from a Kusto query in Application Insights.
My application is writing logs to Application Insights, and we're querying it using Kusto in the Azure portal. To give an example of what I'm trying to do:
We're currently looking at these logs to find an event that is logged when a visitor views a blog post on our site. This works well at the per-blog-post level, but now we want to group this data by the categories these blog posts are in, or by the tags they have, and that's not information we have within the logs.
The information we log contains unique info about that blog post (unique URL, our internal ID, etc.) that I could use to look up this information in another data source (e.g. our SQL DB, where this relation is stored), but I have no idea if or how this is possible. So that's the question: is this possible? Can I query a SQL DB, or get data as JSON via a URL, or something?
Alternative solutions would be to move the reporting elsewhere (e.g. Power BI) and just use AI as a data source, or to actually log all the category/tag info, but I really don't want to go down that route.
Kusto supports accessing external data (blobs, Azure SQL, Cosmos DB); however, Application Insights / Azure Monitor and other multi-tenant services block this functionality due to security and resource-governance concerns.
You could try setting up your own Azure Data Explorer (Kusto) cluster, where this functionality is available, and then access your Application Insights data using a cross-cluster query, or by exporting the data from Application Insights and hooking up Event Grid ingestion into your Kusto cluster.
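To give a flavor of what that looks like once you have your own cluster, here is a hedged sketch using the azure-kusto-data Python package; the cluster URL, database, external table, and column names are all hypothetical:

```python
# pip install azure-kusto-data
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster URL; authenticate however suits your setup.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"
)
client = KustoClient(kcsb)

# Join an external table (e.g. backed by Azure SQL) against
# Application Insights request data via the ADX proxy cluster.
# External table, columns, and the <...> segments are placeholders.
query = """
external_table('BlogPostMetadata')
| join kind=inner (
    cluster('https://ade.applicationinsights.io/subscriptions/<sub-id>/resourcegroups/<rg>/providers/microsoft.insights/components/<app>')
      .database('<app>').requests
    | where name == 'ViewBlogPost'
) on $left.Url == $right.url
| summarize Views = count() by Category
"""
response = client.execute("MyDatabase", query)
for row in response.primary_results[0]:
    print(row["Category"], row["Views"])
```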
Relevant links:
Kusto supporting external data:
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/schema-entities/externaltables
Querying data inside Application Insights:
https://learn.microsoft.com/en-us/azure/data-explorer/query-monitor-data
Continuous export data from Application Insights:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/export-telemetry
Data ingestion into Kusto from Event Grid:
https://learn.microsoft.com/en-us/azure/data-explorer/ingest-data-event-grid
As part of a 'pet project' Flutter app that I am trying to build (in Android Studio), I am looking to add a database of information (possibly with Firebase) for users to use with the app.
My current understanding/capabilities
At the moment, I understand how to build (and have already built) a database in Cloud Firestore, where users can store their own data. A good example of this would be a to-do list app, where the user can create a new item, which is stored in the database with their uid. It remains there until they delete it manually. They are also able to update the entry themselves in the app, such as changing the name of the item.
The aim
I've got a set of data at the moment, in Excel format, that has the potential to grow to thousands of rows. I would like to incorporate this into my app, such that the user is able to query the database, either via multiple dependent drop-down menus or a search widget.
My question
Is there an easy way to convert a reasonably large set of data, currently in Excel format, into a Firebase-type database (such as Cloud Firestore or the Realtime Database) without having to manually enter all of the data?
For RTDB, you can use an Excel-to-JSON tool and import that JSON into RTDB. However, I doubt the exported format will be efficient to use in your app, so you might have to do some transformations (in your language of choice).
If your data is very large (1000s of rows, but... how many columns?), you might have to split your import into multiple smaller imports at different paths of your database.
Doing huge RTDB imports in Firebase console has caused my projects to "misbehave" for a little while, but it goes back to normal quickly, so don't freak out if that happens to you too.
For Firestore, which has no direct JSON import AFAIK, take a look at "How to import CSV or JSON to Firebase Cloud Firestore" for some ideas.
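For a rough idea of what such an import script could look like, here is a hedged Python sketch using pandas and the Firebase Admin SDK; the file name, collection name, and service-account path are placeholders:

```python
# pip install pandas openpyxl firebase-admin
import pandas as pd
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("serviceAccount.json"))
db = firestore.client()

# Hypothetical file name; you may need to clean up NaNs and
# numeric types before writing, depending on your spreadsheet.
df = pd.read_excel("data.xlsx")

batch = db.batch()
for i, row in enumerate(df.to_dict(orient="records")):
    batch.set(db.collection("items").document(), row)
    if (i + 1) % 500 == 0:  # Firestore batches are capped at 500 writes
        batch.commit()
        batch = db.batch()
batch.commit()
```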
I am researching a way to regularly sync Firebase data to BigQuery, and then display that data in Data Studio. I saw these instructions in the documentation:
https://support.google.com/firebase/answer/6318765?hl=en
According to the above instructions, once Firebase is linked to BigQuery, data from Firebase is streamed to BigQuery in real time.
Let's say I have an initial export of Firebase data to BigQuery (before linking) and I made a Data Studio visualization out of that initial data; call it Dataset A. Then I start linking Firebase to BigQuery. I want Dataset A to be in sync with Firebase every 3 hours.
Based on the documentation, does this mean I don't have to use some external program to synchronize Firebase data to BigQuery every 3 hours, since it is streaming in real time already? After linking, does the streamed data from Firebase automatically go to Dataset A?
I am asking because I don't want to break the visualization if the streaming behaves differently than expected (expected meaning that Firebase streams into BigQuery's Dataset A consistently with the original schema). If it does break the original dataset, or doesn't stream into it at all, I might as well write a program that does the syncing.
Once you link your Firebase project to BigQuery, Firebase will continuously export the data to BigQuery, until you unlink the project. As the documentation says, the data is exported to daily tables, and a single fixed intraday table. There is no way for you to control the schedule of the data export beyond enabling/disabling it.
If you're talking about Analytics data, schema changes to the exported data are very rare. So far there has been one schema change, and there are currently no plans for more. If a schema change ever does happen again, all collaborators on the project will be emailed well in advance of the change.
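For reference, a hedged sketch of how the exported tables are typically queried from Python; the analytics_123456789 dataset name is a placeholder for whatever the link creates in your project:

```python
from google.cloud import bigquery

client = bigquery.Client()

# The link creates daily events_YYYYMMDD tables plus a single
# events_intraday_YYYYMMDD table for the current day; the
# events_* wildcard covers both. Dataset name is hypothetical.
query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my_project.analytics_123456789.events_*`
    GROUP BY event_date
    ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```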
I'm trying to analyze app user data and have access to the Firebase and Google BigQuery data. To do some analysis, I need to link these two databases together by identifying the users. In the Firebase database I have a field named user, and I was hoping to find a field in the BigQuery database that contains the same information, to link the two databases. But I only found a field named app instance id in BigQuery, which is not the same as the user field in Firebase, and I don't know how to join these two data sources on a common field. Can anyone help? Thanks!
Firebase does not automatically record user identification in its Analytics data. If you want that, you should set the relevant value yourself by calling setUserId().
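Once setUserId() is being called in the app, the value lands in the user_id column of the BigQuery export (alongside user_pseudo_id, which is the app-instance id), so you can join on it. A hedged sketch with placeholder project, dataset, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# user_id is populated only for events logged after setUserId()
# was called in the app; user_pseudo_id is the app-instance id.
# All table names below are hypothetical.
query = """
    SELECT a.user_id, u.email, COUNT(*) AS events
    FROM `my_project.analytics_123456789.events_*` AS a
    JOIN `my_project.app_data.users` AS u
      ON a.user_id = u.id
    GROUP BY a.user_id, u.email
"""
for row in client.query(query).result():
    print(row.user_id, row.email, row.events)
```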