Script to guarantee app deploy using rsconnect::deployApp

I am able to deploy my shiny app with:
rsconnect::deployApp(appName = 'Test', launch.browser = FALSE, forceUpdate = TRUE)
However, it does not always deploy successfully. I plan to run this in a script as a Scheduled Task and want to make sure deployApp finishes successfully (if the process doesn't succeed, try again).
I imagine you could place this in a while loop, but I am not sure how to write code that recognizes whether the function succeeded or failed. Anyone have ideas?
Error Messages:
Preparing to deploy application...DONE
Error: $ operator is invalid for atomic vectors

As I say in the comment above, I really don't think this is a good idea. To do it safely and robustly will take a lot of work. And the error message you quote above looks pretty "uncontrolled" to me, so I suspect it has more to do with a problem in your app than a temporary issue with the publishing process. In that case you will be in an infinite loop unless you take steps to prevent it. Have you investigated what your publish record and remote deployment log tell you?
That said, this would be my approach if I had to do it.
Create a flag, deploymentFlag, say, in the global environment and set it to FALSE.
Write a function, onDeploymentFailure() say, which sets deploymentFlag to FALSE.
Wrap your call to deployApp in a while loop like this:
deploymentFlag <- FALSE

# Callback handed to deployApp(); note the <<- so it updates the global
# flag rather than creating a local variable inside the function.
onDeploymentFailure <- function(...) {
  deploymentFlag <<- FALSE
}

while (!deploymentFlag) {
  deploymentFlag <- TRUE
  rsconnect::deployApp(
    ...,                          # your usual deployment arguments
    on.failure = onDeploymentFailure,
    logLevel = "verbose",
    recordDir = <some dir>
  )
  if (!deploymentFlag) {
    # ...interrogate the publish record to try to determine what went
    # wrong, and correct it if possible...
  }
}
For safety, especially whilst developing and testing, I'd make sure that each attempt wrote a different publish log, and I'd limit the maximum number of attempts to a very small number: 1 to start with, then 2 or 3 after I'd solved the initial problems, and so on. A sketch of that follows.
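As an illustration, here is a minimal sketch of that cap, assuming the flag and callback defined above. maxAttempts and the per-attempt record directory naming are my own illustrative choices, not anything rsconnect prescribes; the deployApp arguments are taken from the question.

maxAttempts <- 1
attempt <- 0
while (!deploymentFlag && attempt < maxAttempts) {
  attempt <- attempt + 1
  deploymentFlag <- TRUE
  # A fresh record directory per attempt, so each try keeps its own log.
  recDir <- file.path("publish-records", paste0("attempt-", attempt))
  dir.create(recDir, recursive = TRUE, showWarnings = FALSE)
  rsconnect::deployApp(
    appName = "Test",
    launch.browser = FALSE,
    forceUpdate = TRUE,
    on.failure = onDeploymentFailure,
    logLevel = "verbose",
    recordDir = recDir
  )
}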

Related

.First function blocking a project opening

I've just read this post: .First function in R, as I previously used the .First function when instructed during a short online course in which a script was shared and run through with attendees. I don't fully understand much in that post or the use of the .First function (I'm a relative beginner), but I think it is related to an error message that appears every time I try to open a project in RStudio: the project won't open and the session won't close. I can still open the project directly from the .Rproj file, but would like to get rid of this issue.
The error in the console is :
Error in ll(class = c("matrix")) : could not find function "ll"
Has something been overwritten in RStudio? Could someone advise as to how to get rid of this?
Many thanks.
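For context, the error suggests that a .First function, saved in the project's workspace or defined in a .Rprofile, is calling ll() at startup. A hypothetical .Rprofile reproducing the failure mode might look like this (the call to ll() mirrors the error message; it fails unless whatever package defines ll() is attached when the session starts):

# Hypothetical .Rprofile: .First runs automatically at session startup,
# so any function it calls must already be available at that point.
.First <- function() {
  ll(class = c("matrix"))  # errors with "could not find function 'll'"
                           # if the package providing ll() isn't attached
}

Removing that definition, or the stale .First object saved in the project's .RData, should make the error go away.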

Table methods not working anymore

I have a table with several methods; one of them, for example, is validateWrite: when Field A is set to value X, Fields B and C have to be filled in.
Suddenly (without any code changes; I have compared the code with the test environment, where it does work) validateWrite has stopped working.
I have tried recompiling the table, but that did not help.
Any idea why it suddenly stopped working (without other modifications in this environment, or generating a CIL) and what I can try in order to solve it?
If some piece of code is calling table.doInsert(), it skips the validateWrite() method.
If the environments are truly identical, then I would try closing your AX client and deleting your user caches (see http://dynamics-ax-live.blogspot.com/2010/03/more-information-about-auc-file.html): delete all of the *.auc files located at C:\Users\[Username]\AppData\Local.
In addition to what that post tells you to delete, I'd also remove the *.kti file and all of the files and folders inside C:\Users\[UserName]\AppData\Local\Microsoft\Dynamics Ax.
Then open AX and see if the problem still exists. Then do a full system compile, a CIL build, and delete your usage data.
The preferred route though would be to just drop a breakpoint in and debug the code to see what the execution stack is.

How to write (or access?) log of errors and warnings in R

Can anyone tell me how to keep a simple log of errors and warnings thrown during sourcing of code in R? Specifically I want to write them to an object during sourcing so I can print them into a message after execution in case the user missed it as the lines scrolled by.
Background: I wrote a small data-validation program, about 600 lines, that allows the user to select an Excel file, which is then automatically imported, processed, and exported again. I tried hard to include automatic checks in the code (such as checking for required column names and showing a pop-up box if they are not present), but I am aware that I can't think of every possible error that could occur, especially with other users. What I would like is for every error/warning/etc. that occurs during sourcing to be written to an object that I could then use later, such as in a pop-up box. I have been able to figure out every step except creating the error log.
ErrorLog <- (code for collecting errors/warnings here)

(Program Code - already completed)

ErrorPopUp <- tkmessageBox(
  title = "ERRORS",
  message = paste(
    "Please note the following errors/warnings occurred during file processing:",
    ErrorLog,
    "To proceed and export the file please press OK. To exit the program without saving, please press cancel."
  ),
  icon = "warning",
  type = "okcancel"
)

(code to continue and export or quit)
I appreciate any ideas, and thanks for the patience: I saw many posts on advanced topics such as creating your own error messages or sending them to Windows, but none on simply writing them to an object.
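One common approach (a minimal sketch, not from the original post; the file name program_code.R is a placeholder) is to run the program through withCallingHandlers() and tryCatch(), appending each condition's message to a character vector:

# Collect warnings and errors raised while sourcing a script.
ErrorLog <- character(0)

withCallingHandlers(
  tryCatch(
    source("program_code.R"),  # placeholder for the actual program
    error = function(e) {
      ErrorLog <<- c(ErrorLog, paste("ERROR:", conditionMessage(e)))
    }
  ),
  warning = function(w) {
    ErrorLog <<- c(ErrorLog, paste("WARNING:", conditionMessage(w)))
    invokeRestart("muffleWarning")  # keep going after logging the warning
  }
)

ErrorLog can then be pasted into the tkmessageBox message as sketched above.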

grunt updating options in one task so subsequent tasks can use them

I need to run grunt-bump which bumps the version number in the package.json, then run grunt-xmlpoke and update a config file with new version number.
So I have tried a couple of things. Inside the grunt.initConfig I run bump, then I run xmlpoke.
1) xmlpoke takes grunt.file.readJSON('package.json').version
or
2) after bump I run a custom task that adds the new version to a grunt option and xmlpoke takes a value of grunt.options("versionNumber")
In both of these versions the XML result is the pre-bump version. So xmlpoke is getting its values before any tasks are run and then using them when its own task is called. But I need it to take the value that results from a previous task.
Is there any way to do this?
OK, I have figured out the (somewhat obvious) solution.
grunt-bump updates package.json, and it can also update the pkg object that is typically read into the config at the beginning of initConfig. So in the setup of the bump task you specify:
{
  updateConfigs: ['pkg']
}
Then in the xmlpoke config I can do
{ xpath: 'myxpath', value: 'blablabla/<%= pkg.version %>' }
and this works. What I was doing before was
{ xpath: 'myxpath', value: 'blablabla/' + grunt.options.versionNumber }
where I had set versionNumber in a previous task after the bump, or
{ xpath: 'myxpath', value: 'blablabla/' + grunt.file.readJSON('package.json').version }
and neither of those worked. I guess I was just getting too smart for my own good, as <%= %> is the more common and typical way of accessing parameters from within initConfig.
Anyway, there you have it. Or I have it.
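For completeness, here is a minimal Gruntfile sketch of this setup. The xpath, target name, and file mapping are placeholders of mine, not from the original post:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    bump: {
      options: {
        updateConfigs: ['pkg']  // write the bumped version back into pkg
      }
    },
    xmlpoke: {
      config: {
        options: {
          xpath: '/configuration/version',     // placeholder xpath
          value: '<%= pkg.version %>'          // resolved lazily, post-bump
        },
        files: { 'app.config': 'app.config' }  // placeholder file mapping
      }
    }
  });

  grunt.loadNpmTasks('grunt-bump');
  grunt.loadNpmTasks('grunt-xmlpoke');
  grunt.registerTask('release', ['bump', 'xmlpoke']);
};

Because <%= pkg.version %> is a template evaluated only when the xmlpoke task actually runs, it picks up the value that bump wrote back via updateConfigs.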

How to get and set the default output directory in Robot Framework (RIDE) at run time

I would like to move all my output files to a custom location: a Run directory created, based on date and time, at run time. The folder named by date-time is created in the test setup.
I have a function, "Process_Output_files", which moves the files to the Run folder (Run1, Run2, Run3, ...).
I have tried using the -d argument and calling "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is currently using the files.
If I don't use the -d argument, the output files get saved in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in below folder:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in their custom folders. If this is your desire, it can be accomplished at runtime and you won't have to move them as part of your post processing. This will not work in RIDE, unfortunately, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't use it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, that you wish to run, the script could be something like:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple;
# gmtime returns the date-time tuple for Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2" % (dts,)
os.system(cmd)
As an aside, if you do intend to do post processing of your files using rebot, be aware you may not need to create intermediate log and report files. The output.xml files contain everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell your test run to report to the listener with the argument --listener myListener.
This option should be compatible with RIDE though I admit I have never tried to use listeners with the IDE.
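A minimal sketch of such a listener (myListener and moveFiles are the hypothetical names used above; the module holding moveFiles() is a placeholder of mine):

# myListener.py
from post_processing import moveFiles  # placeholder module with your moveFiles()

class myListener(object):
    # Tell Robot Framework which listener interface this class implements.
    ROBOT_LISTENER_API_VERSION = 2

    def close(self):
        # Called once, after the whole test execution has ended.
        moveFiles()

You would then run with something like: pybot --listener myListener test2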
In any case, you can write a custom run script that handles moving the files after test execution; at that point the files are no longer being used by pybot.
