Integromat & Wild Apricot - make.com

I am absolutely new to Integromat and looking for some guidance. I am trying to download member data files from the Wild Apricot app to Google Drive.
I am able to connect to Wild Apricot but I have no idea how to create a scenario to do that.
Many thanks for any guidance!

Your best bet is to experiment with the Wild Apricot "Search Contacts" module to grab the contacts you want, with any optional filters. Once you run it, it will return a set of bundles matching the criteria you set in the module, if any. Then you can add a Google Sheets module to add a row. Create the sheet ahead of time with headers matching the columns you wish to "sync" from Wild Apricot, so you can use them in the Google Sheets integration.
Whether you want to copy everything and erase the target on every run is another part of the scenario you will need to model. Or do you want to update only the fields/rows that have changed? If you want to do an "upsert" that updates existing data in the sheet, that will be more complex but will save operations.
Most Wild Apricot databases are not that large. You will use as many operations as the number of contacts (i.e. bundles) in each scenario run - each new contact inserted into Google Sheets costs one operation.
Here's a two-module starting point: Wild Apricot "Search Contacts" followed by Google Sheets "Add a Row".

Related

Tagging images used for search bar

I'm working with a classmate to build a politics-related meme database where users will have the ability to tag images with hashtags, using Meteor. The purpose of this, beyond data collection, is to provide a powerful search engine where one can find memes with keywords (for example, with the keywords "ukraine" and/or "poutine", you'll find memes related to those topics) that match the hashtags.
We have to build everything from scratch, and I'm wondering if someone here has an idea where to start. In other words:
What is the easiest way to host images with Meteor? Is it through MongoDB?
Is it possible to change the metadata of the images on the client side? Do we need to grant this ability using JavaScript only (or is JSON involved as well)?
If we can manage the first two parts, is there a way to link the metadata (the hashtags in this case) with the search engine in order to retrieve the images?
Thank you for your input!
It's not the easiest option, but I would store the images in Google Cloud Storage or Amazon S3.
I would store the image metadata in a MongoDB database. You can update the database from the client side by calling Meteor methods.
When users search for images by entering keywords, you can query the database and then return the related images.
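A minimal sketch of that split, assuming an Images collection whose documents hold a url pointing at the file in S3/GCS plus a tags array; the collection, method, and publication names are all hypothetical:

```javascript
// Shared code (client and server). Assumed document shape:
// { url: 'https://storage.googleapis.com/...', tags: ['ukraine', 'poutine'] }
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

export const Images = new Mongo.Collection('images');

Meteor.methods({
  // Clients call this to attach a hashtag to an image.
  'images.addTag'(imageId, tag) {
    check(imageId, String);
    check(tag, String);
    Images.update(imageId, { $addToSet: { tags: tag } });
  },
});

if (Meteor.isServer) {
  // Publish only the images whose tags match the searched keywords.
  Meteor.publish('images.byTags', function (tags) {
    check(tags, [String]);
    return Images.find({ tags: { $in: tags } });
  });
}
```

On the client, Meteor.call('images.addTag', id, 'ukraine') updates the metadata, and Meteor.subscribe('images.byTags', keywords) drives the search results.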

Delete a Google Storage folder including all versions of objects inside

Hi and thanks in advance. I want to delete a folder from Google Cloud Storage, including all the versions of all the objects inside. That's easy when you use gsutil from your laptop (you can just use the folder name as a prefix and pass the flag that deletes all versions/generations of each object)...
...but I want this in a script that is triggered periodically (for example while I'm on holiday). My current ideas are Apps Script and Google Cloud Functions (or Firebase Functions). The problem is that in these cases I don't have an interface as powerful as gsutil; I have to use the REST API, so I cannot say something like "delete everything with this prefix", nor "all the versions of this object". Thus the best I can do is:
a) List all the objects given a prefix. So for prefix "myFolder" I receive:
myFolder/obj1 - generation 10
myFolder/obj1 - generation 15
myFolder/obj2 - generation 12
... and so on for hundreds of files and at least 1 generation/version per file.
b) For each file generation, delete it by giving the complete object name plus its generation.
As you can see, that seems like a lot of work. Do you know a better alternative?
Listing the objects you want to delete and deleting them is the only way to achieve what you want.
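As a sketch of that list-and-delete loop in a scheduled Cloud Function, using the official Node.js client (npm: @google-cloud/storage); the bucket and prefix names are placeholders:

```javascript
const { Storage } = require('@google-cloud/storage');

async function deleteFolderAllVersions(bucketName, prefix) {
  const storage = new Storage();
  const bucket = storage.bucket(bucketName);

  // versions: true lists every generation of every object under the prefix.
  const [files] = await bucket.getFiles({ prefix, versions: true });

  await Promise.all(
    files.map((file) =>
      // Target the specific generation, not just the live object.
      bucket.file(file.name, { generation: file.metadata.generation }).delete()
    )
  );
  console.log(`Deleted ${files.length} object generations under ${prefix}`);
}

deleteFolderAllVersions('my-bucket', 'myFolder/').catch(console.error);
```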
The only alternative is to use Object Lifecycle Management, which can delete objects for you automatically based on conditions, if the available conditions satisfy your requirements.
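For reference, a lifecycle configuration that automatically deletes noncurrent (archived) versions after a week might look like the sketch below; note that lifecycle rules apply to the whole bucket, not just one folder, so check that fits your case:

```json
{
  "rule": [
    {
      "action": { "type": "Delete" },
      "condition": { "age": 7, "isLive": false }
    }
  ]
}
```

You would apply it with gsutil lifecycle set lifecycle.json gs://my-bucket.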

Can I make this simple app in App maker using calculated models for demonstration purposes?

I am new to Google App Maker and I don't have a lot of experience with coding either (sorry :/). Since App Maker is marketed as a low-code app builder tool, I assumed it would not be that hard to make a very simple app with it. However, for me it is.
I need to make a simple app for demonstration purposes only (so Cloud SQL and other complex database solutions are not of interest here). I want to make it using calculated models (correct me if I am wrong: calculated models are just temporary solutions, since apps need real databases to be fully functional?).
My app is basically made of two data models: 1) Employees and 2) Departments.
-> Fields for "Employees" are: First name, Last name and Department.
-> Field for "Departments" is just Department name.
My app is supposed to look like this:
1st page: Table with current employees that has a button to add a new employee,
2nd page: Table with all department names (e.g. marketing, finance...) that has a button to add a new department name,
3rd page: Form that opens when I click the add-new-employee button, in which I can insert a first name and last name and choose a department from a drop-down menu,
4th page: Form that opens when I click the add-new-department button, in which I can insert a new department name,
5th page: Form (or some other widget, not sure here) that has an option to insert a first and last name in order to find out which department that employee is assigned to.
I tried to make the first four pages, but I end up with forms that I cannot insert anything into. The 5th page is still too much for me.
I hope you understand my struggles and if you know how to do it please share your knowledge. Thank you very much!
Calculated models are kind of like SQL views - they are not necessarily for temporary solutions. Every time you load a calculated model, the script you write under that model's datasource is run. That script usually loads data from an external source (e.g. grabbing stock prices from an API, loading data from an external SQL server, or generating random placeholder data).
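To make that concrete, a calculated model's server-side datasource script might look something like this sketch; the Departments model and its DepartmentName field are placeholder names:

```javascript
// Datasource script for a calculated model. App Maker runs this every time
// the datasource loads, and displays whatever records the script returns.
var names = ['Marketing', 'Finance', 'Sales'];

return names.map(function (name) {
  var record = app.models.Departments.newRecord();
  record.DepartmentName = name;
  return record;
});
```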
You could use Cloud SQL models for this application you are building - the table with all department names displayed on the second page could just be a Cloud SQL table with a single field for the department name.
I suggest you work through the example apps so you can get a better understanding of how the different components work. Here is a link to one for you to get started.
In short, you're going to create a few models to store information (I suggest using Cloud SQL, as calculated models require code whereas Cloud SQL is more plug-and-play through App Maker's bindings). Before you create any pages, try to lay out how your databases will look, as that will dictate how you set bindings or program your scripts.
Asking others to completely build what is essentially a combination of the tutorials already provided by Google is pretty counterproductive - you should ask more specific questions about implementation.
As for App Maker being a low-code environment, that's only partially true. For very, very simple apps (think glorified forms) you will need only a couple of lines of code and can probably do everything through drag-and-drop. However, anything more complicated than a simple form will almost certainly require a good chunk of actual code. There are plenty of resources online to learn JavaScript.
You might want to try a Google partner like AppSynergy for building stuff like this. It might be overkill for what you need (or maybe not, if you intend to build a lot more).

BigQuery setup from Google Analytics

I would like some guidance on setting up BigQuery data storage from Google Analytics.
We have six different websites; four of them belong to one project and two to another. We would like to analyse the data separately for each site, per project across its sites, and for all the sites together.
Hence, what is the best structure to set up in BigQuery?
Two projects, with 4 and 2 datasets? Or one main project with two datasets containing 4 and 2 tables? Is that even possible?
Or is it so easy to extract the data that it doesn't matter, and we can just put every site in its own project and extract the data as we want?
Please give me some guidance on this issue.
Kind regards
The short answer:
Or is it so easy to extract the data that it doesn't matter, and we can just put every site in its own project and extract the data as we want?
Yes!
The longer answer:
You can extract data from only one view per property (Set up a BigQuery Export), so start by identifying which one you'll link and ensure the settings are the same across all of the views you are going to import, assuming this is important to you.
Each profile/site will go into its own dataset, split into daily tables, making it easy to query them individually, or together, as required.
It is possible to query across projects, so if you store data across two, you'll still be able to join them.
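As a sketch of what that cross-project query can look like with the official Node.js client (npm: @google-cloud/bigquery); the project IDs and the numeric view-ID datasets are placeholders, and ga_sessions_* is the daily table pattern the GA export creates:

```javascript
const { BigQuery } = require('@google-cloud/bigquery');

async function sessionsAcrossProjects() {
  const bigquery = new BigQuery();

  // UNION ALL combines sites whose exports live in different projects.
  const query = `
    SELECT 'site1' AS site, COUNT(*) AS sessions
    FROM \`project-a.123456.ga_sessions_*\`
    WHERE _TABLE_SUFFIX BETWEEN '20190101' AND '20190131'
    UNION ALL
    SELECT 'site5' AS site, COUNT(*) AS sessions
    FROM \`project-b.654321.ga_sessions_*\`
    WHERE _TABLE_SUFFIX BETWEEN '20190101' AND '20190131'
  `;

  const [rows] = await bigquery.query({ query });
  rows.forEach((row) => console.log(row.site, row.sessions));
}

sessionsAcrossProjects().catch(console.error);
```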
In my opinion it would make things easier for analysts if the data were all in one project, as you'll be able to save queries in a single location and track query costs centrally; but if you need to keep two projects, your data can still be connected.

Should I use Wordpress Transient API in this case?

I'm writing a simple WordPress plugin for work and am wondering whether the Transients API is practical in this case, or if I should seek out another way.
The plugin's purpose is simple. I'm making a call to USZip Web Service (http://www.webservicex.net/uszip.asmx?op=GetInfoByZIP) to retrieve data. Our sales team is using a Lead Intake sheet that the plugin will run on.
I wanted to reduce the number of API calls, so I thought of setting a transient for each zip code, with the zip code as the key, storing the incoming data (city and zip). If the corresponding data for a given zip code already exists, there is no need to make an API call.
Here are my concerns:
1. After a quick search, I realized that transient data is stored in the wp_options table, and storing this data would balloon that table in no time. Would this cause a significant performance issue if the table becomes huge?
2. Is it horrible practice to create this many transient keys? They could easily become thousands in a few months' time.
If using transients is not the best way, could you please point me in the right direction? Thanks!
P.S. I opted for the Transients API over the Options API. I know zip codes don't change often, but they sometimes do, so I set an expiration time of 3 months.
A less-inflated solution would be:
Store a single option called uszip with a serialized array inside the option
Grab the entire array each time and simply check if the zip code exists
If it doesn't exist, grab the data, add it to the array, and save the whole thing again
You should make sure you don't hit the upper bounds of a serialized array in this table (9,000 elements) considering 43,000 zip codes exist in the US. However, you will most likely have a very localized subset of zip codes.
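Since the Transients API is PHP, here's a rough sketch of that single-array approach (in PHP rather than the JavaScript used elsewhere on this page); the transient name uszip_cache and the helper uszip_fetch_from_api() are hypothetical:

```php
<?php
// Look up a zip code, caching all results in one transient-backed array.
function uszip_lookup( $zip ) {
    $cache = get_transient( 'uszip_cache' );
    if ( ! is_array( $cache ) ) {
        $cache = array();
    }

    // Serve from the cached array when we already know this zip code.
    if ( isset( $cache[ $zip ] ) ) {
        return $cache[ $zip ];
    }

    // Otherwise hit the API once, add the result, and re-save the whole array.
    $data = uszip_fetch_from_api( $zip ); // hypothetical helper wrapping the USZip call
    if ( $data ) {
        $cache[ $zip ] = $data;
        set_transient( 'uszip_cache', $cache, 3 * MONTH_IN_SECONDS );
    }

    return $data;
}
```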
