I have a very simple app to learn on, just displaying parts of an imported spreadsheet, but I'm not sure how to make it update automatically and repull the data. I'm not adding any additional information inside App Maker, so clearing and repulling is fine. I just want it to update from the sheet either every day or whenever the app is opened; either would be fine.
I was able to get the sheet's data into my model in the first place with https://developers.google.com/appmaker/models/import-export, but I'm not sure how to write a script so that it auto-updates.
If I understand correctly, you want to run a process at regular intervals to import data into Google App Maker from a Google Spreadsheet. If that is the case, you can use a time-driven trigger. An example of how to manage time triggers is available here.
The trigger must invoke a function that reads the data from the spreadsheet using the SpreadsheetApp service and then saves all the records in bulk. Here is an example of how you can update several records in bulk.
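To illustrate, here is a minimal sketch of what such a server script might look like, assuming a model named Inventory with Name and Quantity fields that mirror the sheet's columns; the model, field names, and spreadsheet ID are placeholders, not part of the original answer:

// Server script sketch: clear the model, then re-import every row from the sheet.
function reimportSheet() {
  // Delete all existing records (a full clear-and-repull, as the question allows)
  var existing = app.models.Inventory.newQuery().run();
  app.deleteRecords(existing);

  // Read all data rows from the source sheet, skipping the header row
  var rows = SpreadsheetApp.openById('SPREADSHEET_ID')
      .getSheets()[0].getDataRange().getValues().slice(1);

  // Build one record per row and save them in a single bulk call
  var records = rows.map(function(row) {
    var record = app.models.Inventory.newRecord();
    record.Name = row[0];       // map each sheet column to a model field
    record.Quantity = row[1];
    return record;
  });
  app.saveRecords(records);
}

// Installing a daily time trigger pointing at reimportSheet (done once) covers
// the "update every day" case:
// ScriptApp.newTrigger('reimportSheet').timeBased().everyDays(1).create();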
For a project, I need to apply some functions, via Pub/Sub, to data that is added to a Google Sheets or Google BigQuery table.
I want to pass the newly added table rows to listeners that are subscribed to the Pub/Sub topic. Essentially, the table contains links to images on external websites, and I want to automatically download those images, store them in our Google Cloud Storage bucket, and add a link to the image's new location to the original table. This is supposed to happen immediately after the data is received.
I cannot figure out how to publish a message containing the new data to my Pub/Sub topic once data is appended to my tables.
Does anyone know if what I am trying to achieve is even possible?
We use Firebase/Google Analytics in our Android app. Each event is saved with a lot of extra information (user ID, device info, timestamps, user properties, geographical location …). The extra info is there by default, but we don't want it to be collected.
We tried two things:
1) Update the BigQuery schema
We deleted the unwanted columns from BigQuery. Unfortunately, a new export is created in BigQuery every day, so we would need to know where those fields are coming from, and that is something we don't know.
2) Default event parameters within the app
We tried to use default event parameters from inside the app so that the city would always be null. Here is an example with the user's city:
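// Register a default event parameter so geo.city is sent as null with every event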
Bundle defaultValues = new Bundle();
defaultValues.putString("geo.city", null);
FirebaseAnalytics.getInstance(ctx).setDefaultEventParameters(defaultValues);
Unfortunately, we still see geo.city populated in our BigQuery data.
Is there a way of changing what is collected by default?
There is no way to disable the geographic information; Analytics uses IP addresses to derive a visitor's geolocation. Updating the BigQuery schema is probably the viable approach, but you would have to set up a process that carries out the update on a daily basis, precisely because the export takes place every day.
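As an illustration, here is a minimal sketch of that daily cleanup, assuming a Node.js script run on a schedule (cron, Cloud Scheduler, etc.); the project, dataset, and column names are placeholders:

// Copy the newest Firebase Analytics export into a sanitized table,
// dropping the unwanted columns. Requires: npm install @google-cloud/bigquery
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function sanitizeLatestExport() {
  // Firebase exports daily tables named events_YYYYMMDD; take yesterday's,
  // since the daily export usually lands a day behind.
  const d = new Date(Date.now() - 24 * 60 * 60 * 1000);
  const suffix = d.toISOString().slice(0, 10).replace(/-/g, '');
  const query = `
    CREATE OR REPLACE TABLE \`my_project.sanitized.events_${suffix}\` AS
    SELECT * EXCEPT (geo, device)  -- drop whichever columns you don't want kept
    FROM \`my_project.analytics_123456789.events_${suffix}\``;
  const [job] = await bigquery.createQueryJob({query});
  await job.getQueryResults();  // wait for the copy to finish
}

sanitizeLatestExport().catch(console.error);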
Background: I am using Firestore as the main database for my (web) application. I also pre-render the data stored in there, which basically means that I collect all data needed for specific requests so I can later fetch them in a single read access, and I store that pre-rendered data in a separate Firestore collection.
When a user changes some data, I want to know when this background rendering is finished, so I can then show updated data. Until rendering is finished, I want to show a loading indicator ("spinner") so the user knows that what he is currently looking at is outdated data.
Until now, I planned to have the application write the changed data to the database and use a cloud function to propagate the changed data to the collection of pre-rendered data. This poses a problem: the writing application only knows when the original write access has finished, but not when the re-rendering has finished, so it doesn't know when to update its views. I can hook into the collection of rendered views to get an update when the rendering finishes, but that callback won't be notified if nothing visibly changes, so I still do not know when to remove the spinner.
My second idea was to have the renderer function publish to a Pub/Sub topic, but this fails because if the user's request happens to leave the original data unchanged, the onUpdate/renderer is not called, so nothing gets published to the Pub/Sub topic, and again the client does not know when to remove the spinner.
In both cases, I could theoretically first fetch the data and look if something changed, but I feel that this too easily introduces subtle bugs.
My final idea was to disallow direct writes to the database and have all write actions be performed through cloud functions instead, that is, more like a classical backend. These functions could then run the renderer and only send a response (or publish to a pubsub) when the renderer is finished. But this has two new problems: First, these functions have full write access to the whole database and I'm back to checking the user's permissions manually like in a classical backend, not being able to make use of Firestore's rules for permissions. Second, in this approach the renderer won't get before/after snapshots automatically like it would get for onUpdate, so I'm back to fetching each record before updating so the renderer knows what changed and won't re-render huge parts of the database that were not actually affected at all.
Ideally, what (I think) I need is either
(1) a way to know when a write access to the database has finished including the onUpdate trigger, or
(2) a way to have onUpdate called for a write access that didn't actually change the database (all updated fields were updated to the values they already contained).
Is there any way to do this in Firestore / cloud functions?
You could increment a counter in the rendered documents, so that at least one field always changes even when none of the "meaningful" fields do.
For that, the best option is to use FieldValue.increment.
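For instance, a minimal sketch of the renderer side, assuming a Node.js Cloud Function writes the rendered documents; the collection name and the renderVersion field are hypothetical:

const admin = require('firebase-admin');
admin.initializeApp();

// Always bump a counter alongside the rendered data, so the write is never a
// no-op and client onSnapshot listeners fire even when nothing visibly changed.
async function saveRendered(docId, renderedData) {
  await admin.firestore().collection('rendered').doc(docId).set({
    ...renderedData,
    renderVersion: admin.firestore.FieldValue.increment(1),
  }, { merge: true });
}

The client can then listen with onSnapshot and remove the spinner as soon as renderVersion changes, instead of diffing the rendered fields.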
I've created a Shiny app that draws a graph based on data entered daily into a shared Excel sheet (.xlsx) that lives in a shared folder (an L drive).
How would I format or upload the data so that the app refreshes whenever a new daily line of data is entered?
Here is one possible approach, along with reference documentation:
Create a workflow to fetch the data using its URL:
read in online xlsx - sheet in R
Make the data retrieval process reactive:
https://shiny.rstudio.com/articles/reactivity-overview.html
Set a reactiveTimer to periodically check for updates:
https://shiny.rstudio.com/reference/shiny/1.0.0/reactiveTimer.html
By doing so, your app will fetch the document on a regular basis to update your graph. If you want real-time updates (i.e. every time there is a change in the document), you have to be able to trigger the application from the outside, which is more complicated (especially via Excel).
Update:
Following up on your comment: you don't need the data to be online; you are fine as long as you can import it into R. Just make that process reactive and set a timer to refresh every day (see the documentation for examples). Alternatively, you can add an actionButton to refresh manually.
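For instance, a minimal sketch, assuming the workbook sits at a fixed path visible to the Shiny server and that the readxl package is available; the file path and the plot call are placeholders:

library(shiny)
library(readxl)

ui <- fluidPage(
  plotOutput("daily_plot")
)

server <- function(input, output, session) {
  # Re-reads the workbook whenever its modification time changes,
  # checking once per minute; this covers the "new daily line" case.
  sheet_data <- reactiveFileReader(
    intervalMillis = 60 * 1000,
    session = session,
    filePath = "L:/shared/daily_data.xlsx",  # placeholder path
    readFunc = read_excel
  )

  output$daily_plot <- renderPlot({
    plot(sheet_data())  # replace with your actual graph
  })
}

shinyApp(ui, server)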
I am building an Ionic application with Firebase Database support. One of the application's features is polling/voting. I want to refresh my object (i.e. empty the data) every 24 hours, and I want to achieve this without writing any code on the client side. Is it possible for Firebase to maintain this routine by itself? Here is a snapshot of the data; I want to refresh the polls object every 24 hours.
After a little searching I found Zapier; it makes it very easy to do what you need.
However, I think you should keep your data in the database and change your query to pull data by date instead.
You can also use a cron job in any server-side program to clean the Firebase data.
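For example, a minimal sketch of the cron approach, assuming Node.js with firebase-admin and node-cron; the database URL and the polls path are placeholders:

const admin = require('firebase-admin');
const cron = require('node-cron');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://your-project.firebaseio.com'  // placeholder
});

// Every day at midnight, wipe everything under /polls
cron.schedule('0 0 * * *', async () => {
  await admin.database().ref('polls').remove();
  console.log('polls object cleared');
});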