Why is gmailr not working in the Docker build process? - r

I'm using the gmailr package to send emails from an R script.
Locally it all works fine, but when I try to run it during a Docker build step on Google Cloud, I get an error.
I implemented it in the way described here.
So basically, locally the part of my code that sends emails looks like this:
gm_auth_configure(path = "credentials.json")
gm_auth(email = TRUE, cache = "secret")
gm_send_message(buy_email)
Please note that I renamed the .secret folder to secret, because I want to deploy my script with Docker on gcloud and didn't want any unexpected errors due to the dot in the folder name.
This is the code which I'm now trying to run in the cloud:
setwd("/home/rstudio/")
gm_auth_configure(path = "credentials.json")
options(
  gargle_oauth_cache = "secret",
  gargle_oauth_email = "email.which.was.used.to.get.secret@gmail.com"
)
gm_auth(email = "email.which.was.used.to.get.secret@gmail.com")
When running this code in a Docker build process, I receive the following error:
Error in gmailr_POST(c("messages", "send"), user_id, class = "gmail_message", :
Gmail API error: 403
Request had insufficient authentication scopes.
Calls: gm_send_message -> gmailr_POST -> gmailr_query
I can reproduce the error locally when I do not check the corresponding permission box on Google's OAuth consent screen (screenshot omitted).
Therefore my first assumption is that the secret folder is not being copied correctly in the Docker build process, so authentication is attempted again; in a non-interactive session the box can't be checked, and the error is thrown.
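One way to test this assumption is to inspect the token cache inside the container before sending; a minimal check (gmailr caches tokens via gargle, and secret is the cache folder from above):
library(gargle)
# List tokens found in the configured cache; an empty listing would mean
# the secret folder was not copied into the image as expected.
options(gargle_oauth_cache = "secret")
gargle_oauth_sitrep()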
This is the part of the Dockerfile.txt where I'm copying the files and running the script:
#2 ADD FILES TO LOCAL
COPY . /home/rstudio/
WORKDIR /home/rstudio
#3 RUN R SCRIPT
CMD Rscript /home/rstudio/run_script.R
and this is the folder which contains all files/folders being pushed to the cloud (screenshot omitted).
My second assumption is that I have to somehow specify the scope for the Google platform in my Docker image, but unfortunately I'm not sure where to do that.
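If it helps, this is roughly the direction I have in mind, though I'm not sure it's the right place; gm_auth() accepts a scopes argument, and the "full" value here is an assumption on my part:
gm_auth(
  email = "email.which.was.used.to.get.secret@gmail.com",
  scopes = "full"  # assumed scope; "full" covers sending mail
)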
I'd really appreciate any help! Thanks in advance!

For anyone experiencing the same problem, I was finally able to find a solution.
The problem is that gargle picks up GCE (Google Compute Engine) auth instead of using the normal user OAuth flow.
To temporarily disable GCE auth, I'm now using the following piece of code:
library(gargle)
cred_funs_clear()  # drop all registered credential functions (incl. GCE)
cred_funs_add(credentials_user_oauth2 = credentials_user_oauth2)  # keep only user OAuth
gm_auth_configure(path = "credentials.json")
options(
  gargle_oauth_cache = "secret",
  gargle_oauth_email = "sp500tr.cloud@gmail.com"
)
gm_auth(email = "email.which.was.used.for.credentials.com")
cred_funs_set_default()  # restore gargle's default credential functions
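With user OAuth re-registered as the only credential function, gm_auth() resolves the cached token without prompting, and the original send call works unchanged:
gm_send_message(buy_email)  # buy_email as defined in the question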
For further reference, see also here.

Related

Swagger UI does not show up in JupyterLab

I have saved an R script named MC_APIv1.1-Prod.R in JupyterLab, and when I run the following code in the R console of JupyterLab:
plumber::plumb("MC_APIv1.1-Prod.R")$run(host = "0.0.0.0", port = 5763, swagger = TRUE)
It gives the following message:
Running plumber API at http://0.0.0.0:5763
Running swagger Docs at http://127.0.0.1:5763/__docs__/
But I can't see the Swagger UI while the kernel is still running. When I run this code in RStudio I can see the Swagger window and it shows the plot in the browser window, but that is not working in my case.
Can anyone please explain what is happening here and how to resolve this issue? Any help would be appreciated.
Complete code for the mentioned R file is here.
Maybe check this direction:
https://stackoverflow.com/questions/1694144/can-two-applications-listen-to-the-same-port
The message you're receiving might imply that the port cannot be occupied by both simultaneously, since it shows plumber and Swagger both at 5763.
The docs endpoint is only open to access from the local machine, as its URL is 127.0.0.1.
The swagger argument is deprecated according to the docs: https://www.rplumber.io/reference/Plumber.html?q=swagger#method-run-
Try:
plumber::plumb("MC_APIv1.1-Prod.R")$run(host = "0.0.0.0", port = 5763, docs = "swagger")
or:
plumber::plumb("MC_APIv1.1-Prod.R")$set_docs("swagger")$run(host = "0.0.0.0", port = 5763)

Docker container failed to start when deploying to Google Cloud Run

I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth(path = "/home/rstudio/xxxx-xxxx.json", email = "xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project = project, dataset = dataset, table = table,
                        write_disposition = "WRITE_APPEND",
                        values = Sys.time() %>% as_tibble(), billing = project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial_forQuestion.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console, I get this error:
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL: https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two log entries that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787) Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment identifies the source of the errors: you need to expose a web server that listens on $PORT and answers HTTP/1 and HTTP/2 requests.
However, I have a workaround: you can use Cloud Build for this. Simply define your step with your container name and the args, if needed.
Let me know if you need more guidance on this (strange) workaround.
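If you would rather keep the job on Cloud Run as a real service, the R script itself has to start a web server bound to that port; a minimal sketch using plumber (the health-check route and the 8080 fallback are assumptions):
library(plumber)
# Cloud Run injects the port to listen on via the PORT environment variable.
port <- as.integer(Sys.getenv("PORT", "8080"))
pr <- pr()                                 # empty plumber router
pr <- pr_get(pr, "/", function() "ok")     # hypothetical health-check endpoint
pr_run(pr, host = "0.0.0.0", port = port)  # blocks, so the container keeps running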
The log entry "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a known gVisor issue.

R beakr script as Rscript Windows 10 service

I'm trying to set up a simple beakr service on Windows that implements the example at https://github.com/MazamaScience/beakr. I'm able to run the script from the command line successfully, and I've been able to add the service in Windows using NSSM, but I am unable to start the service.
When I dig into the service error logs, I see that Rscript.exe cannot be executed due to a non-specific permissions problem.
My Rscript.exe runs out of C:\Program Files\R\<Version>\bin and my beakr.R script runs out of my user home directory.
If anyone has had success implementing a similar service (a web-page-based REST endpoint) using R on Windows, I would love to know how you did it.
This is what I did to run an R script as a service using the latest pre-release version of NSSM on Windows 10:
Create a directory to store the files.
In this example: C:\R\ServiceTest
Create a never-ending script in this directory: ServiceTest.R
library(beepr)
# Test script: beeps every 10 seconds
while (TRUE) {
  beepr::beep(1)
  if (interactive()) {
    # Show a spinning cursor to facilitate testing in interactive mode
    for (i in 1:10) {
      if (i %% 4 == 0) { cursor <- '/' }
      if (i %% 4 == 1) { cursor <- '-' }
      if (i %% 4 == 2) { cursor <- '\\' }
      if (i %% 4 == 3) { cursor <- '|' }
      cat('\r', cursor)
      flush.console()
      Sys.sleep(1)
    }
  } else {
    Sys.sleep(10)
  }
}
I used to let this kind of script run in a console open on my desktop to check various alarms regularly.
Create a batch file to run the script: ServiceTest.bat
Rscript ServiceTest.R
Open an Admin console and make sure the batch file runs correctly:
C:\R\ServiceTest>ServiceTest.bat
C:\R\ServiceTest>Rscript ServiceTest.R
|
Cancel the batch (Ctrl+C)
Using the Admin console, install the batch file as a service using NSSM:
nssm install
Set Service name: ServiceTest
Set Application path: C:\R\ServiceTest\ServiceTest.bat
Set Working directory: C:\R\ServiceTest\
Set Logon: Windows user + password
Install the service.
Open the Windows Services Manager, find ServiceTest and start it: if everything went well, that's it!
If you get an error message, check Windows Event Log / Services: you can find hints there about the cause of the problem. The most common problems I encountered:
an error in a path
running the service as a local/system user instead of my own user account and password
If you want to remove the service:
nssm remove ServiceTest
This very nicely replaced the many R consoles I used to leave running in the background on my desktop.
I see no reason it wouldn't work with a REST endpoint.
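Applied to the original question, the never-ending script would simply be the beakr app itself, since listen() blocks; a minimal sketch adapted from the beakr README (the route and port follow the README's example and are otherwise assumptions):
library(beakr)
# Minimal endpoint to run as the NSSM service; listen() blocks,
# which keeps the service process alive.
newBeakr() %>%
  httpGET(path = "/hello", function(req, res, err) {
    "Hello, World!"
  }) %>%
  listen(host = "127.0.0.1", port = 25118)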

Shiny App Error: /v1/applications/ 400 - Validation Error Execution halted

Hi, I'm having a million problems trying to publish my app to shinyapps.io.
Firstly, I have Rtools 3.2 installed on my computer and added to the Path, but it is not recognized in the registry. Never mind, this code should fix it:
install.packages("installr")
library(installr)
install.Rtools(choose_version = FALSE, check = TRUE, use_GUI = TRUE,
               page_with_download_url = "http://cran.r-project.org/bin/windows/Rtools/",
               keep_install_file = TRUE)
install.packages("devtools")
library(devtools)
devtools::install_github('rstudio/shinyapps')
Next, to deploy my app to my shinyapps.io account:
library(shinyapps)
shinyapps::setAccountInfo(name='xxxx', token='xxxxxxxxxx', secret='xxxxxxxx')
Then my app starts running in a browser, and I click publish to my shiny account. However, when the app is being deployed, it shows the following error:
Preparing to deploy application...Error: /v1/applications/ 400 - Validation Error
Execution halted
Any ideas what the problems may be? Thank you.
I had the same error returned. In my case the problem was the name of the app itself. Deployed apps must have names at least 4 characters long with no spaces.
Setting an application name solved this problem for me. My application directory contained a space.
deployApp(appName = "myapp")
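For reference, the same call with the newer rsconnect package (which superseded the shinyapps package) might look like this; the directory, name, and account are placeholders:
library(rsconnect)
# appName must be at least 4 characters long and contain no spaces.
rsconnect::deployApp(
  appDir = "path/to/my-app",   # placeholder: directory containing app.R
  appName = "my-shiny-app",    # placeholder app name
  account = "xxxx"             # placeholder shinyapps.io account
)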
I had the same problem; however, my app name was fine and even adding 'appName =' did not help. Just a side note: this issue came up because I changed the name of my folder in an effort to change the name of my app on shinyapps.io.
The only thing that worked for me was publishing through the "Publish" button in the upper right of RStudio. I would recommend publishing with that instead of the command. You can select files you do not want to publish within the app folder, and you can publish the app under a different name than your local one.
I also had similar errors, and the issue was resolved after I changed the name of the directory that holds the app.R file from only 3 characters to more than 4 characters.

Installing an MSP using Powershell works on the local machine, fails remotely. Why?

I need some Powershell advice.
I need to install an application's MSP update file on multiple Win08r2 servers. If I run these commands locally, within the target machine's PS window, it does exactly what I want it to:
$command = 'msiexec.exe /p "c:\test\My Application Update 01.msp" REBOOTPROMPT=S /qb!'
invoke-wmimethod -path win32_process -name create -argumentlist $command
The file being executed is located on the target machine.
If I remotely connect to the machine and execute the two commands, it opens two x64 msiexec.exe processes and one msiexec.exe *32 process, and just sits there.
If I restart the server, it doesn't show that the update was installed, so I don't think it's a timing thing.
I've tried creating and remotely executing a PS1 file with the two lines, but that seems to do the same thing.
If anyone has advice on getting my MSP update installed remotely, I'd be all ears.
I think I've included all the information I have, but if something is missing, please ask questions, and I'll fill in any blanks.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
My process for this is:
Read a CSV for server name and Administrator password
Create a credential with the password
Create a new session using the machine name and credential
Create a temporary folder to hold my update MSP file
Call a PS1 file that downloads the update file to the target server
>>> Creates a new System.Net.WebClient object
>>> Uses that web client object to download from the source to the location on the target server
Call another PS1 file that applies the patch that was just downloaded -- this is where I'm having issues.
>>> Set the variable shown above
>>> Execute the file specified in the variable
Close the session to the target server
Move to the next server in the CSV…
If I open a PS window and manually set the variable, then execute it (as shown above in the two lines of code), it works fine. If I create a PS1 file on the target server containing the same two lines of code, then right-click > 'Run with PowerShell', it works as expected/desired. If I remotely execute my code in PowerGUI, it returns a block of text that looks like this, then just sits there. RDP'd into the server, the installer never launches. My understanding of the "ReturnValue" value is that "0" means the command was successful.
PSComputerName : xx.xx.xx.xx
RunspaceId : bf6f4a39-2338-4996-b75b-bjf5ef01ecaa
PSShowComputerName : True
__GENUS : 2
__CLASS : __PARAMETERS
__SUPERCLASS :
__DYNASTY : __PARAMETERS
__RELPATH :
__PROPERTY_COUNT : 2
__DERIVATION : {}
__SERVER :
__NAMESPACE :
__PATH :
ProcessId : 4808
ReturnValue : 0
I even added a line of code between the variable and the execution that creates a text file on the desktop, just to verify I was getting into my 'executeFile' file, and that text file does get created. It seems that it's just not remotely executing my MSP.
Thank you in advance for your assistance!
Catt11.
Here's the strategy I used to embed an msp into a powershell script. It works perfectly for me.
$file = "z:\software\AcrobatUpdate.msp"
$silentArgs = "/passive"
$additionalInstallArgs = ""
Write-Debug "Running msiexec.exe /update $file $silentArgs"
$msiArgs = "/update `"$file`""
$msiArgs = "$msiArgs $silentArgs $additionalInstallArgs"
Start-Process -FilePath msiexec -ArgumentList $msiArgs -Wait
You probably don't need to use the variables if you don't want to; you could hard-code the values. I have this set up as a function to which I pass those arguments, but if this is more of a one-shot deal, it might be easier to hard-code them.
Hope that helps!
Using Start-Process for an MSP package is not good practice, because some update packages lock down PowerShell libraries, so you must use a WMI call instead.
