Where do I upload p12 file received from Google Analytics to Pentaho Server? - google-analytics

I am using the Pentaho Community version with PDI on my local machine. I am trying to integrate PDI with Google Analytics, and this works on my local machine. But when I upload the same transformation file to the server (via PUC upload), it is unable to access the .p12 credential file generated by Google Analytics. The only way this seems possible is if I upload the credential file to some server location that is accessible to my Pentaho Server.
How do I solve this? Where should I put the credential file on the server for this to work? Is this functionality even available in the Pentaho Community version?

You will have to keep the file on the server. Store the key file path in a variable defined in kettle.properties and reference that variable in the transformation step instead of a hard-coded path. Set the variable according to each environment's needs, so the same transformation resolves the correct path both locally and on the server.
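As a sketch of that setup (the variable name GA_P12_FILE and paths below are hypothetical, not anything Pentaho prescribes), the server's kettle.properties might contain:

```properties
# <user-home>/.kettle/kettle.properties on the Pentaho Server
GA_P12_FILE=/opt/pentaho/keys/analytics-key.p12
```

while your local machine's kettle.properties points the same variable at the local copy. In the Google Analytics step, enter ${GA_P12_FILE} in the key file field; each environment then resolves its own path without editing the transformation.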

Related

How to create dashboards for structured log files on disk in a tool like Kibana or Grafana

I have a .NET Web API application where I have used Serilog and was able to generate structured logs with custom fields. All log files are stored in a folder on the server. Can I install Kibana/Grafana on the same server to create dashboards using the info in the structured log files? Their (Kibana/Grafana) websites refer to data sources like Elasticsearch and others, but not directly to structured log files.
Both Kibana and Grafana require some kind of database to connect to. AFAIK, so does Apache Superset.
If you don't want to provision and manage a database, you could just write something to read the files directly and render charts.
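A minimal sketch of that last approach, assuming the Serilog output is JSON-lines (one JSON object per line, with a Level field - adjust to your actual sink format): read the files directly and compute the kind of aggregate a dashboard panel would plot.

```python
import json
from collections import Counter
from io import StringIO

# Stand-in for a Serilog JSON log file on disk; real code would use open(path).
log_file = StringIO(
    '{"Timestamp": "2023-01-01T10:00:00", "Level": "Information", "Message": "started"}\n'
    '{"Timestamp": "2023-01-01T10:00:01", "Level": "Error", "Message": "db timeout"}\n'
    '{"Timestamp": "2023-01-01T10:00:02", "Level": "Error", "Message": "db timeout"}\n'
)

# Count events per log level - a typical dashboard aggregate.
levels = Counter(json.loads(line)["Level"] for line in log_file if line.strip())
print(dict(levels))  # → {'Information': 1, 'Error': 2}
```

From aggregates like this you can render charts with any plotting library, with no database to provision.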

Firestore Run Functions Locally with Admin

I'm trying to run my Cloud Functions locally using the below guide
https://firebase.google.com/docs/functions/local-emulator
I'd like to be able to use the Admin SDK in my local functions. I've downloaded the JSON admin keys from the Service Accounts pane of the Google Cloud Console, and the guide says to add them using
export GOOGLE_APPLICATION_CREDENTIALS="path/to/key.json"
I generated keys using PROJECTNAME@appspot.gserviceaccount.com, which has App Engine default service account credentials,
NOT
firebase-adminsdk-CODE@PROJECTNAME.iam.gserviceaccount.com, which has firebase-adminsdk credentials.
What I tried
I tried saving it to a separate folder and provided the path relative to root. I executed this command in the terminal while in my functions folder. It didn't give me any response; the terminal just went to the next line.
export GOOGLE_APPLICATION_CREDENTIALS="/Users/[user]/Documents/[PROJECT]/Service_Account/file_name.json"
Questions:
Did I download/use the right JSON credentials?
Is there a certain place I need to save that .json file? Or can it be anywhere on my system?
Does that path need to be from root? Or relative to my functions folder?
Where do I need to execute this command?
Should it provide some sort of response that it worked? How do we know if it does?
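On the last two questions: `export` prints nothing on success, so silence is expected. The way to verify is to read the variable back from a process that inherited it - a minimal sketch (the path below is hypothetical):

```python
import os

# Hypothetical path, standing in for the value you exported in the shell.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/me/keys/admin-key.json"

# `export` is silent; confirm by reading the variable back.
path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
print(path)                        # the exported path, if the variable is set
print(os.path.isfile(path or "")) # True only if the key file really exists there
```

In a shell the equivalent check is `echo $GOOGLE_APPLICATION_CREDENTIALS`. Note the variable only lives in that terminal session and its children, so the emulator has to be started from the same shell where you ran `export`.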

How to use UTL_FILE to store a file on the client

I want to use PL/SQL installed on a Windows client to retrieve some PDF files saved as BLOBs on the server. I found a tutorial about UTL_FILE, but it looks like it can only create files on the server side. Is it possible to create the file on the client, or is there a way to transfer files from server to client? Can someone give me some suggestions? Thanks.
UTL_File has a parameter named "LOCATION". This is where your files will be written and is called a DIRECTORY. You should be able to create your own DIRECTORY and point it to a location that can be reached by your Oracle instance.
CREATE OR REPLACE DIRECTORY PDF_Out AS 'C:\Users\Me\PDF_Out';
Then replace what you are currently using as the value for "LOCATION" with the name of your new DIRECTORY; in the sample it is called PDF_Out.
You may need to check running services to find out which user is running the Oracle listener and grant that user appropriate read/write privileges to the location defined by your new DIRECTORY.
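A sketch of the server-side write, assuming the PDF_Out DIRECTORY above and a hypothetical table docs(id, pdf_blob) - UTL_FILE writes the BLOB out in binary chunks:

```sql
DECLARE
  l_blob   BLOB;
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount BINARY_INTEGER := 32767;
  l_pos    INTEGER := 1;
  l_len    INTEGER;
BEGIN
  SELECT pdf_blob INTO l_blob FROM docs WHERE id = 1;  -- hypothetical table/row
  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  -- 'wb' = write binary; PDF_OUT is the DIRECTORY created above
  l_file := UTL_FILE.FOPEN('PDF_OUT', 'doc1.pdf', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/
```

Note the file still lands on the database server, not the client; to answer the original question, you would then transfer it (network share, SFTP) or fetch the BLOB directly from client-side code instead of using UTL_FILE.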

Accessing files from Google cloud storage in RStudio

I have been trying to create a connection between Google Cloud Storage and an RStudio Server (the one I spun up in Google Cloud), so that I can access the files in R to run some analysis on them.
I have found three different ways to do it on the web, but there isn't much clarity around them so far.
Access the file by using the public URL specific to the file [This is not an option for me]
Mount the Google Cloud Storage bucket as a disk in the RStudio Server and access it like any other file on the server [I saw someone post about this method but could not find any guides or materials that show how it's done]
Using the googleCloudStorageR package to get full access to the Cloud Storage bucket.
Step 3 looks like the standard way to do it, but I get the following error when I run the gcs_auth() command:
Error in gar_auto_auth(required_scopes, new_user = new_user, no_auto = no_auto, :
Cannot authenticate - options(googleAuthR.scopes.selected) needs to be set to include
https://www.googleapis.com/auth/devstorage.full_control or
https://www.googleapis.com/auth/devstorage.read_write or
https://www.googleapis.com/auth/cloud-platform
The guide on how to connect using this is found on
https://github.com/cloudyr/googleCloudStorageR
but it says it requires a service-auth.json file to set the environment variables, along with other keys and secret keys, without really specifying what these are.
If someone could help me know how this is actually setup, or point me to a nice guide on setting the environment up, I would be very much grateful.
Thank you.
Before using any Google Cloud services you have to attach a payment card.
So, assuming that you have created the account: go to the Console; if you have not created a project yet, create one; then in the sidebar find APIs & Services > Credentials.
Then,
1) Create a Service Account key and save the file as JSON - you can only download it once.
2) Create an OAuth 2.0 client ID: give the app a name, select the type as web application, and download the JSON file.
Now, for Storage, find Storage in the sidebar and click on it.
Create a bucket and give it a name.
I have added a single image to the bucket for the code example.
Let's look at how to download this image from Storage; for other operations you can follow the link that you have given.
First create an environment file named .Renviron in your working directory, so R automatically picks up the JSON files.
In the .Renviron file, reference the two downloaded JSON files like this:
GCS_AUTH_FILE="serviceaccount.json"
GAR_CLIENT_WEB_JSON="Oauthclient.json"
# R part
library(googleAuthR)
library(googleCloudStorageR)
# set the OAuth client and scopes first (reads GAR_CLIENT_WEB_JSON)
gar_set_client(scopes = c("https://www.googleapis.com/auth/devstorage.read_write",
                          "https://www.googleapis.com/auth/cloud-platform"))
gcs_auth()  # authenticate (uses GCS_AUTH_FILE if set)
gcs_get_bucket("your_bucket_name")    # name of the bucket that you created
gcs_global_bucket("your_bucket_name") # set it as the global bucket
gcs_get_global_bucket() # check that your bucket is set as global; you should get its name back
objects <- gcs_list_objects() # metadata for the objects in the bucket
objects$name # the object names
gcs_get_object(objects$name[[1]], saveToDisk = "abc.jpeg") # save the object to disk
**Note:** if the JSON files don't get loaded, restart the session using .rs.restartR()
and check with
Sys.getenv("GCS_AUTH_FILE")
Sys.getenv("GAR_CLIENT_WEB_JSON")
# both should show the file names
You probably want the FUSE adaptor - this will allow you to mount your GCS bucket as a directory on your server.
Install gcsfuse on the R server.
Create a mount directory.
Run gcsfuse your-bucket /path/to/mnt
Be aware, though, that read/write performance isn't great via FUSE.
Full documentation
https://cloud.google.com/storage/docs/gcs-fuse

WSO2 API Manager - API-specific configuration files

We use a configuration management tool (Chef) for WSO2 API Manager installation (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration is stored in the database, which is obviously not the case.
One API-specific file in particular stood out to us:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation or is there a way to create these files from the information stored in the database?
We have considered using the "API import/export tool" (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based deployment synchronization (dep-sync). Once you enable dep-sync by pointing it at an SVN server location, all the server-specific data will be written to the SVN server.
When you install the newer pack, what you need to do is point it to the SVN location and the database. (I hope you are using a production-ready database rather than the built-in H2.)
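For reference, a sketch of what the dep-sync configuration looks like in <wso2am>/repository/conf/carbon.xml - the SVN URL and credentials below are placeholders, and element names can differ between Carbon versions, so check the WSO2 docs for your exact release:

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2-depsync</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpass</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```

With this in place, a fresh installation laid down by Chef can check out the synapse API artifacts (such as the admin--my-api-definition file mentioned above) instead of losing them.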
