How to extract the output to CSV using terraform? - terraform-provider-aws

I have created multiple instances using Terraform and I need to export that output to CSV.
Can you please help me with this?

I'm not sure exactly what information you want to export, but the information for all resources created by Terraform is stored in the tfstate file.
You can open the file and extract the information directly.
You can run the following command to list all resources that were created
terraform state list
And you can run the following command to get the details of a resource.
terraform state show <resource name from list>
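If what you want in the CSV is a set of attributes per instance (IDs, IP addresses, and so on), one way is to render the state as JSON and filter it with jq. A minimal sketch, assuming aws_instance resources in the root module; the selected fields and the instances.csv file name are just examples, so adjust them to what you actually need to export:
# Render the state as JSON, keep only aws_instance resources,
# and write selected attributes out as CSV (example fields/file name).
terraform show -json terraform.tfstate \
  | jq -r '.values.root_module.resources[]
           | select(.type == "aws_instance")
           | [.name, .values.id, .values.public_ip, .values.private_ip]
           | @csv' \
  > instances.csv
Resources created inside child modules live under .values.root_module.child_modules instead, so the jq path would need to be extended for those.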

Related

Azure Databricks: How do we access R Scripts present on DBFS?

I'm new to Databricks. I am trying to access a .R file that is present in DBFS storage but I cannot figure out how to do so. Any help is really appreciated.
I can read data from the storage using the /dbfs file path and can also source the script, but I want to make edits to the script.
You need some editor to do that - for example, you can set up RStudio on your cluster and connect to it via the RStudio UI; in that case you can edit R files directly on DBFS.
But really, the simplest option would be to use the Databricks CLI fs command to copy the file to your local machine, make changes in the editor of your choice, and upload the file back, as in the sketch below.
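With the Databricks CLI configured, that round trip could look like this (the DBFS path and file name are placeholders - substitute your actual script location):
# Copy the script from DBFS to the local machine, edit it locally, then push it back.
databricks fs cp dbfs:/scripts/my_script.R ./my_script.R
# ... edit my_script.R in your editor of choice ...
databricks fs cp --overwrite ./my_script.R dbfs:/scripts/my_script.R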

Package log4r-mongoDB integration

I explored the log4r package and found that it has console and file appenders. I am looking to push the logs to a MongoDB database. I am unable to find any way of pushing the logs to MongoDB; is there an appender for this?
TIA
You can try the mongoimport tool.
If you do not specify a file (option --file=), mongoimport reads data from standard input (stdin).
Or write the logs into a temporary file and use mongoimport to import it.
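A minimal sketch of the temporary-file approach, assuming the log4r file appender has been configured (e.g. via a custom layout) to write one JSON document per line to app.log; the database and collection names below are just examples:
# Import one JSON document per line from the log file into MongoDB.
mongoimport --db app_logs --collection log4r_events --file app.log
# Or, since mongoimport reads stdin when --file is omitted, stream the file in directly:
cat app.log | mongoimport --db app_logs --collection log4r_events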

Accessing files from Google cloud storage in RStudio

I have been trying to create a connection between Google Cloud Storage and an RStudio server (the one I spun up in Google Cloud), so that I can access the files in R to run some analysis on them.
I have found three different ways to do it on the web, but I haven't seen much clarity around these approaches so far.
Access the file by using the public URL specific to the file [This is not an option for me]
Mount the Google Cloud Storage bucket as a disk on the RStudio server and access it like any other file on the server [I saw someone post about this method but could not find any guides or materials that show how it's done]
Using the googleCloudStorageR package to get full access to the Cloud Storage bucket.
Option 3 looks like the pretty standard way to do it, but I get the following error when I run the gcs_auth() command:
Error in gar_auto_auth(required_scopes, new_user = new_user, no_auto = no_auto, :
Cannot authenticate - options(googleAuthR.scopes.selected) needs to be set to include
https://www.googleapis.com/auth/devstorage.full_control or
https://www.googleapis.com/auth/devstorage.read_write or
https://www.googleapis.com/auth/cloud-platform
The guide on how to connect using this is found on
https://github.com/cloudyr/googleCloudStorageR
but it says it requires a service-auth.json file to set the environment variables, along with other keys and secret keys, and does not really specify what these are.
If someone could help me know how this is actually setup, or point me to a nice guide on setting the environment up, I would be very much grateful.
Thank you.
Before using any Google Cloud services you have to attach your card (set up billing).
So, I am assuming that you have created the account. After creating the account, go to the Console; if you have not created a project yet, create one, then in the sidebar find APIs & Services > Credentials.
Then,
1) Create a service account key and save this file as JSON; you can only download it once.
2) Create an OAuth 2.0 client ID: give the name of the app, select the type as web application, and download the JSON file.
Now, for Storage, go to the sidebar, find Storage and click on it.
Create a bucket and give it a name.
I have added a single image to the bucket; you can add one as well for the purposes of this code.
Let's look at how to download this image from Storage; for other things you can follow the link that you have given.
First, create an environment file named .Renviron in your working directory so that the JSON files are picked up automatically.
In the .Renviron file, point to the two downloaded JSON files like this:
GCS_AUTH_FILE="serviceaccount.json"
GAR_CLIENT_WEB_JSON="Oauthclient.json"
# R part
library(googleCloudStorageR)
library(googleAuthR)
gcs_auth()  # authenticate
# set the scopes
gar_set_client(scopes = c("https://www.googleapis.com/auth/devstorage.read_write",
                          "https://www.googleapis.com/auth/cloud-platform"))
gcs_get_bucket("your_bucket_name")    # name of the bucket that you have created
gcs_global_bucket("your_bucket_name") # set it as the global bucket
gcs_get_global_bucket()               # check that the global bucket is set; you should get your bucket name
objects <- gcs_list_objects()         # list the objects in the bucket
names(objects)
gcs_get_object(objects$name[[1]], saveToDisk = "abc.jpeg")  # save the first object to disk
Note: if the JSON files do not get loaded, restart the session using .rs.restartR()
and then check with
Sys.getenv("GCS_AUTH_FILE")
Sys.getenv("GAR_CLIENT_WEB_JSON")
# both should print the file names
You probably want the FUSE adaptor - this will allow you to mount your GCS bucket as a directory on your server.
Install gcsfuse on the R server.
Create a mount directory.
Run gcsfuse your-bucket /path/to/mnt (a sketch of these steps is below).
Be aware though that read/write performance isn't great via FUSE.
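A minimal sketch of those steps, assuming gcsfuse is already installed and the server's service account can access the bucket; the bucket and mount-point names are placeholders:
# Create a mount point and mount the bucket there ("my-bucket" is a placeholder).
mkdir -p /home/rstudio/gcs-mnt
gcsfuse my-bucket /home/rstudio/gcs-mnt
# Files in the bucket now appear as ordinary files under /home/rstudio/gcs-mnt.
# Unmount when finished:
fusermount -u /home/rstudio/gcs-mnt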
Full documentation
https://cloud.google.com/storage/docs/gcs-fuse

cloudera navigator insert overwrite directory

I am trying to insert a file from a table into one of the HDFS directories. It transfers successfully and creates the file, but I am unable to see the data lineage in Cloudera Navigator.
I am posting my command below. Can anyone help with what the issue could be - is it with the syntax, or do we have to configure some setting in Navigator?
Syntax:
insert overwrite directory '/user/demo/test_demo' select age, id from demo1;
Also, on the known issues page of the web portal they mention that it should show up.
Link: https://www.cloudera.com/documentation/enterprise/release-notes/topics/cn_rn_known_issues.html#hive_hue
The above query comes under operation execution. When you copy the above query into the Navigator search bar and select operation execution from the filters, you will be able to see the data lineage.
If you search for the result based on the output directory, it won't show anything, as it is a Hive operation execution!

how to add a pushkey when manually entering data via the firebase console?

I would like to create data in a new Firebase database and I need to include pushkeys. I know it can import data via a JSON file, but I do not know how to create a pushkey in a JSON file.
I have tried using the Firebase console data entry tool but could not figure out how to do it. Googling the problem did not turn up any results.
If anyone can help I would prefer to use the firebase data entry tool.
You can write any key; just be sure that your key is not duplicated, then upload your JSON file - there is no problem doing that.
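For example, an import file could look like the sketch below; the node and key names are made up, and the keys only need to be unique within their parent node (they do not have to look like Firebase's auto-generated push IDs):
# Write an example JSON file to import via the Firebase console
# ("users", "user_001", "user_002" are made-up placeholders).
cat > import.json <<'EOF'
{
  "users": {
    "user_001": { "name": "Alice", "age": 30 },
    "user_002": { "name": "Bob", "age": 25 }
  }
}
EOF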
