How to run a windmill test script - automated-tests

I have recorded some interactions with windmill and when I hit save I get the following (python) script:
# Generated by the windmill services transformer
from windmill.authoring import WindmillTestClient

def test_recordingSuite0():
    client = WindmillTestClient(__name__)
    client.type(text=u'Hello World', id=u'lst-ib')
    client.click(link=u'Hello world program - Wikipedia, the free encyclopedia')
    client.waits.forPageLoad(timeout=u'20000')
Now I don't have any clue how to run this. In the end I need a setup with which I can run 100 tests at the same time.
Once I get one test running, it should be easy to parallelize with Python, but right now I can't even run this simple test.
I hope someone can help me :).

You run it by pasting the script that windmill outputs into a file named *.py and then executing the following command:
windmill chrome test=./[directory containing your *.py files] http://www.google.com
You can also specify the name of the test file directly.
If you want to run the scripts in parallel, you can just execute those commands in separate terminals. So far this only works with Chrome; Firefox complains if more than one instance is open.
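If you would rather drive the parallel runs from Python, as suggested in the question, here is a minimal sketch. It assumes the windmill command line shown above and hypothetical per-test directories under ./tests/; both are placeholders to adapt.
# run_parallel.py - minimal sketch, assuming the windmill CLI shown above
import subprocess

# Hypothetical layout: one directory per recorded test, e.g. ./tests/test0 ... ./tests/test99
test_dirs = ['./tests/test%d' % i for i in range(100)]

procs = []
for d in test_dirs:
    cmd = ['windmill', 'chrome', 'test=%s' % d, 'http://www.google.com']
    procs.append(subprocess.Popen(cmd))  # start every run without waiting

exit_codes = [p.wait() for p in procs]  # then wait for all of them to finish
print('failed runs: %d' % sum(1 for c in exit_codes if c != 0))
Starting 100 Chrome instances at once is heavy, so in practice you may want to launch the runs in smaller batches.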

Related

Different behavior when Rscript & rstan is run as a cron job

I try to run an R script at regular intervals to update a webpage. The script runs fine when called from the terminal like this:
/usr/local/bin/Rscript /Users/me/path/myscript.R
However, if I try running it as a cron job, I get an error. I add the job to crontab like this:
46 10 * * * /usr/local/bin/Rscript '/Users/me/path/myscript.R' >> '/Users/me/path/mylog.log' 2>&1
The script does run in R, but aborts due to an error. Specifically, I fit some models using rstan and get an initialization error. (The error only applies to some models, while others still run fine.) The initialization values are valid by definition, but do not seem to be used properly. It is as if rstan is doing the math differently (and incorrectly) when it is run through cron.
The session info from R is identical whether I run the script in the terminal or as a cron job. My question is what else might still differ depending on how the script is run. Could rstan be using a different version of C++ when run as a cron job? Are there other paths I may need to set to get this to work correctly?
Update: The script also works if I run it using R CMD BATCH in the terminal, but not if I use R CMD BATCH in a cron job. Using launchd triggers the same issue. I also tried using CmdStan through cmdstanr, and the same thing happens: it runs fine until added to a cron job.
Edit 2: The models I thought ran fine in cron were not actually fine. The results were wrong until I used the fix explained below.
It looks like I finally managed to fix this, and I'm posting my solution here for anyone who encounters the same problem.
I ran env in terminal to see my current user environment. I copy-pasted the full output to the top of my crontab file. (Simply adding the PATH variable was not sufficient. I suppose it was SHELL or perhaps both PATH and SHELL that did the trick, but I haven't explored this further.)
To edit my user's crontab, I ran crontab -e, then pressed i to edit the file, pasted everything from env at the top of the file, stopped editing by pressing ctrl + c, and quit by typing :wq and hitting enter.
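As a sketch, the top of the crontab ended up looking something like this; the variable values below are placeholders, since the real ones come from your own env output:
# environment copied from `env` (values here are placeholders)
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/Users/me
# ...followed by the rest of the `env` output, and then the job itself:
46 10 * * * /usr/local/bin/Rscript '/Users/me/path/myscript.R' >> '/Users/me/path/mylog.log' 2>&1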

Automating R to run scripts

I'm basically looking for a way to run R scripts automatically, just as if I were copying and pasting them into the console. I've tried the 'taskscheduleR' package, but it just seems to write output to a log file in the directory, which isn't the same as running the code inside the RStudio application.
For example, say I want to get the last closing prices of 5 stocks each night: the script would run on its own, the resulting variables would be available as if I had run it in RStudio, and all of the code would stay in the script file.
Any thoughts?
I would suggest the built-in Task Scheduler application if you are using Windows.
Create a task that runs a batch script file. This batch script file has only one line, which executes the Rscript you want. Set it to run each night (or at whatever time you want).
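A minimal sketch of such a batch file (the R version and the paths are placeholders; point them at your own installation and script):
@echo off
"C:\Program Files\R\R-4.3.1\bin\Rscript.exe" "C:\scripts\nightly_prices.R" >> "C:\scripts\nightly_prices.log" 2>&1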
I am not that well-versed in Linux and macOS, but here's what I know:
Linux has cron. Add a job to crontab with your preferred timing that executes your script, e.g. '/path/to/bin/Rscript /path/to/script.R'.
macOS has Automator + iCal (for scheduling). It also has crontab, like Linux.
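For the nightly run mentioned in the question, the crontab entry could look roughly like this (10 PM and the paths are just placeholders):
0 22 * * * /usr/local/bin/Rscript /path/to/script.R >> /path/to/script.log 2>&1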

How to see logs of running robot script?

Robot scripts, when run from RIDE, generate output.xml, report.html, etc. once the run is over.
Is there any way to view the logs while the script is still running? (For example, when I use pause on failure.)
Also, sometimes I have to Stop/Abort the run in the middle, and no logs are generated in such cases.
Kindly help,
Thanks in advance
As for the first part: RIDE runs tests with its own listener added, which provides more verbose output and the pause/resume functionality. The easiest thing is to run tests not from RIDE but from the console using the robot/pybot script. In that case far fewer logs are written to the output (though it doesn't provide pause/resume functionality).
For the second part: robot (RIDE starts the robot script, as you can see in your execution log: command: pybot.bat...) generates the output.xml file not after but during execution, so the generated output.xml is not valid until the test run is finished. After a normal execution, the rebot tool generates log.html automatically. So generally it's possible to take the following steps:
"Fix" your incomplete output.xml file after the execution stops with fixml. The output.xml location for a RIDE execution can be found in that same execution log (e.g. ...\appdata\local\temp\RIDEv_0yrp.d\ in my case).
Run rebot stand-alone: rebot --log log.html --report report.html output.xml. You can check the description of rebot's options with rebot --help (as usual).
Please also note that the directory where RIDE output files are stored is temporary: it exists only while RIDE is running. You will lose your output on exiting RIDE.
I'm using RIDE 1.5, so my answer may not be valid for other versions.
In RIDE, under the Run tab, when you are running the scripts you have an option, Show Message Log, which shows the runtime log.
Try this out.

How do I pass the environment of one R script to another?

I'm effectively trying to tack save.image() onto the end of a script without modifying that script.
I was hoping something like Rscript target_script.R | saveR.R destination_path would work, where saveR.R reads,
args.from.usr<-commandArgs(TRUE)
setwd(args.from.usr[1])
save.image(file=".RData")
But that clearly does not work. Any alternatives?
You can write an R script file that takes two parameters: 1, the script file you want to run, and 2, the directory where you want to save the image.
# runAndSave.R ------
args.from.usr <- commandArgs(trailingOnly=TRUE)
source(args.from.usr[1])
setwd(args.from.usr[2])
save.image(file=".RData")
And then run it with
Rscript runAndSave.R target_script.R destination_path
You could also schedule a task in the operating system itself. On Linux you would use the terminal and the tool called cron; on Windows you can use Task Scheduler. If you program the OS to open a terminal, run the script, and then save the image, you may get what you need: the data generated by the script saved without actually modifying the script.

batch process for R gui

I have created a batch file to launch R scripts in Rterm.exe. This works well for regular weekly tasks. PBWeeklyMeetingScriptV3.R is the R script run by Rterm.
set R_TERM="C:\Program Files\R\R-2.14.0\bin\x64\Rterm.exe"
%R_TERM% --slave --no-restore --no-save --args 20120401 20110403 01-apr-12 03-apr-11 < PBWeeklyMeetingScriptV3.R > PBWeeklyMeetingScriptV3.batch 2> error.txt
I've tried to modify this to launch the R GUI instead of the background process, as I'd like to inspect and potentially manipulate the data.
If I change my batch file to:
set R_TERM="C:\Program Files\R\R-2.14.0\bin\x64\Rgui.exe"
the batch file will launch the R GUI but doesn't start the script. Is there a way to launch the script too?
Alternatively is there a way to save/load the work space image to access the variables that are created in the script?
You can save and load workspaces by using save.image() and load(). I do this all the time when scripting to pass data sets between two separate script files, tied together using Python or bash. At the end of each R script, just add:
save.image("Your_image_name.RData")
The image will be the workspace that existed at the moment the command was run (so, if it's the last command in the file, it's the workspace right before the script exits). We also use this at my job to create "snapshots" of input and output data, so we can reproduce the research later. (We use a simple naming convention based on the time of the run, and label the files with that.)
Not sure about launching and then running the GUI with specific scripts in it; I don't think that's a feature you'll find in R, simply because the whole point of running a batch file is usually to avoid the GUI. But hopefully, you can just save the image to disk, and then look at it or pass it to other programs as needed. Hope that helps!
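As a sketch of the "tied together using Python" workflow mentioned above, a small driver script can run the two R scripts in order; the script names and the image file name are placeholders:
# run_pipeline.py - minimal sketch of chaining two R scripts through a saved workspace
import subprocess

# step1.R is assumed to end with: save.image("Your_image_name.RData")
subprocess.check_call(['Rscript', 'step1.R'])

# step2.R is assumed to start with: load("Your_image_name.RData")
subprocess.check_call(['Rscript', 'step2.R'])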
