I have been using RDCOMClient for a while now to interact with vendor software, and for the most part it has worked fine. Recently, however, I have needed to loop through many operations (several hundred), and I am running into problems with the RDCOM.err file growing to a very large size (easily GBs). This file is written to C:\ with no apparent option to change that. Is there some way that I can suppress this output or specify another location for the file? I don't need any of the output, so suppressing it would be best.
EDIT: I tried adding a file.remove() call to my script, but R has the file locked. The only way I can get the lock released is to restart R.
Thanks.
Setting the permissions to read only was going to be my suggested hack.
A slightly more elegant approach is to edit one line of the C code in the package in src/RUtils.h from
#define errorLog(a,...) fprintf(getErrorFILE(), a, ##__VA_ARGS__); fflush(getErrorFILE());
to
#define errorLog(a, ...) {}
However, I've pushed some simple updates to the package on GitHub that add a writeErrors() function for toggling whether errors are written, so the logging can be turned on and off dynamically.
So
library(RDCOMClient)
writeErrors(FALSE)
will turn off the error logging to the file.
I found a workaround for this. I created the files C:\RDCOM.err and C:\RDCOM_server.err and marked them both as read-only. I am not sure if there is a better way to accomplish this, but for now I am running without logging.
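If you prefer to set that up from R itself, a rough sketch of the same idea (assuming you can write to C:\ once and that read-only log files are acceptable) would be:
# create the log files if needed, then mark them read-only so RDCOMClient cannot grow them
# run this before library(RDCOMClient); the paths are the defaults described above
for (f in c("C:/RDCOM.err", "C:/RDCOM_server.err")) {
  if (!file.exists(f)) file.create(f)
  Sys.chmod(f, mode = "0444")  # on Windows this only toggles the read-only attribute
}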
I'm working to define a Docker container which can be spun up in a cloud environment and run some reporting on our firm's database and spin itself down, with as little involvement from our data science team (including myself) as possible.
I'm pretty much done getting everything up and running, with one irritating exception: the reporting is done in R using some code that we've been using for a few years. I'm building on top of Rocker verse, and I'm adding the needs library.
The annoying thing (in this use case) about needs is that when it is first run, it asks the following:
>library('needs')
Should `needs` load itself when it's... needed? (this is recommended)
1: No
2: Yes
Selection:
In a typical interactive setting this is fine: I just type "Yes", hit Enter, and I'm good to go. However, when I want the whole environment to build and run once a week on its own, I don't want to have to answer this question. I'd like it to assume Yes.
What I've tried so far includes each of these:
library('needs', quiet=TRUE)
library('needs', quietly=TRUE)
suppressMessages(library('needs', quietly=TRUE))
suppressWarnings(suppressMessages(library('needs', quietly = T)))
suppressPackageStartupMessages(library('needs', quietly=TRUE))
none of which solves the issue. The needs documentation provides for changing this setting later in a programmatic way, but not for defining the setting when first running needs:
Recommended use is to allow the function to autoload when prompted the
first time the package is loaded interactively. To change this setting
later, run needs:::autoload(TRUE) or needs:::autoload(FALSE) to turn
autoloading on or off, respectively.
I've also tried quietly installing needs, to no avail. Unfortunately, I can't run bash commands in my Dockerfile to respond Yes, or at least I haven't found a way.
I'd like to avoid removing the dependency on needs, as it would involve a LOT of code refactoring.
Any ideas on how to solve this?
Thank you! :]
-Vince
Update
The solution is a bit hacky, but in my Dockerfile I'm doing a vim edit of the file that needs assigns to the sysfile variable:
sysfile <- system.file("extdata", "promptUser", package = "needs")
which for ME was /usr/local/lib/R/site-library/needs/extdata/promptUser, and changing its contents from "1" to "0" solved my problem.
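For reference, the same edit could be scripted from R instead of vim, roughly like this (assuming the file is writable inside the image, e.g. from a RUN Rscript -e step in the Dockerfile):
# overwrite the promptUser flag so needs no longer asks on first load
sysfile <- system.file("extdata", "promptUser", package = "needs")
writeLines("0", sysfile)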
A better solution would probably be to make it so it doesn't ask the question in the first place. You can view the code it runs on package load on github: https://github.com/cran/needs/blob/master/R/needs-package.R
If you set the option it checks for beforehand, then it doesn't need to ask the question in the first place:
options(needs.promptUser = FALSE)
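So a hedged sketch of an unattended setup is simply setting that option before the package is attached, for example in an .Rprofile or Rprofile.site baked into the image (the exact file location is an assumption about your setup):
# set the option before needs is ever loaded so the interactive prompt never fires
options(needs.promptUser = FALSE)
library(needs)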
Despite numerous searches, I can't seem to find a clear explanation as to what "Source on Save" means in RStudio.
I have tried ?source and the explanation there isn't clear, either.
As far as I can tell, it seems to run the script when I hit Save, but I don't understand the relevance/significance of it.
In simple terms, what exactly does Source on Save do and why would/should I use it?
This is basically a shortcut to save and execute your code: you type something, save the script, and it is automatically sourced.
Very useful for short scripts, but very annoying for time-consuming longer scripts.
So sourcing is basically running each line of your file.
EDIT:
So, thinking of a scenario where this might be useful: say you are developing a function that you will later put into a package. You write the function in a separate file but test it from the command line.
Normally, you would have to re-run the function definition every time you change something. With "Source on Save" the file is re-sourced automatically whenever you save it, and you can press Ctrl + 2 to jump to the console and test the function directly.
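As a tiny illustration (the function name and file are made up): suppose a file myfun.R, open in RStudio with "Source on Save" ticked, contains:
# myfun.R - re-sourced automatically every time the file is saved
centre <- function(x) {
  x - mean(x)  # subtract the mean from each element
}
After each save you can press Ctrl + 2 and immediately call centre(c(1, 2, 6)) in the console against the latest definition, without sourcing anything by hand.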
Working in R, my datasets are much bigger now. But I remember starting to code in Python and vi: I set my editor up to execute the code on save, since those little scripts finished in less than 10 seconds...
So maybe it is just not standard to work with small datasets... But for development I can still recommend using only 10% of a normal dataset. It will speed up graphics creation and a lot of other things as well. Test with the complete dataset every now and then.
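A rough sketch of that 10% idea, where full_data is a hypothetical data frame standing in for your real dataset:
# develop against a 10% random sample; switch back to full_data for final checks
dev_data <- full_data[sample(nrow(full_data), size = floor(0.1 * nrow(full_data))), ]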
I am attempting to automate the insertion of JPEG images into Powerpoint. I have a macro done for that already, except using R would be infinitely better for my purposes.
The package R2PPT should do this, I understand. However, I cannot use it. For example, when I try to use PPT.Open, I understand I can call it two different ways, with method = "rcom" or method = "RDCOMClient". Using the latter, R always crashes, sending an error report to Windows. Using the former, it tells me I need to install statconnDCOM, before giving the error:
Error in PPT.Open(x) : attempt to apply non-function.
I cannot install statconnDCOM freely, as I wouldn't call this work non-commercial. So if there isn't a way to get around this issue, are there at least some free alternatives to R2PPT so that I can save several hours of manual work with a simple R script? If there is a way for me to use R2PPT, that would be ideal.
Thanks!
Edit:
I'm using R version 2.15 and downloaded the most recent version of R2PPT. PowerPoint is 2007.
Do you have administrative privileges on this machine?
There is an issue with the RDCOMClient package: it needs permission to write the file rdcom.err in the root of drive C:. If you don't have privileges to write to C:\, there is a rather cumbersome workaround:
Close R
Create "c:\temp" folder if it doesn't exist.
Locate on your hard drive file rdcomclient.dll. It usually placed in \R\library\RDCOMClient\libs\i386\ and in \R\library\RDCOMClient\libs\x64\ (you need to patch file which corresponds your Windows version - 32 bit or 64 bit). It's recommended to make backup copy of this files before patching.
Open rdcomclient.dll in text editor (Notepad++, for example -http://notepad-plus-plus.org/)
Find in file string c:\rdcom.err - it occurs only once.
Go into overwrite mode (usually by pressing "Ins" key). It is very important that new path will have the same number of characters as original one. Type C:\temp\e.rr instead of c:\rdcom.err
Save the file.
Now all should work fine.
Arguably not an answer, but have you looked at using Sweave/knitr to render your presentations in LaTeX using something like Beamer? (As discussed on slide 17 here.)
Wouldn't help any with getting JPGs into a PowerPoint, but would certainly make putting R-output (numerical or graphical) into a presentation much easier!
Edit: if you want to use knitr (which I recommend), here's another reference.
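As a hedged sketch, a minimal Beamer + knitr source file (say slides.Rnw; the file name and chunk options are just illustrative) could look like:
\documentclass{beamer}
\begin{document}
\begin{frame}{A plot produced by R}
% the chunk below is replaced by the rendered figure when the file is knitted
<<cars-plot, echo=FALSE, fig.height=4>>=
plot(cars)
@
\end{frame}
\end{document}
Knitting it with knitr::knit2pdf("slides.Rnw") (or RStudio's Compile PDF button) produces the PDF slides.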
I have about 5 scripts that are all part of a project and are to be run one after the other. I would like to open the first script, run it, and then be prompted at the end: "Do you want to run XrefGenetic.r?" If yes, then XrefGenetic.r should open and run. I am 100% certain R can do this; in fact, I think I used to know how but have forgotten and cannot find it anywhere.
How do I open another r script from within an r script?
Are you thinking of source()?
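A minimal sketch of that prompt-then-source flow (assuming XrefGenetic.r is in the working directory) could be:
# ask at the end of the first script, then run the next one if the user agrees
# note: readline() only prompts in an interactive session
answer <- readline("Do you want to run XrefGenetic.r? (y/n) ")
if (tolower(answer) %in% c("y", "yes")) {
  source("XrefGenetic.r")
}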
My usual recommendation is to create a package, as that alleviates all these issues: functions and symbols are known (or hidden if you chose not to export them) and you have generally much better control.
I've been working on automating the complicated process of building source code on a build machine and then transferring the compiled image files over to my embedded ARMv7 device to be flashed. Each step by itself is easy to automate with a standard Linux shell script, but when trying to do everything in one giant script things get complicated. Thus far I've been using expect-lite to do the work, which has worked except that now I've run into a problem. When transferring the images over, I have expect-lite code that looks like the following:
$imageDestination="/the/destination"
$imageSource="/the/source/"
>sftp $userName'#'$buildMachine
>$password
>get $imageSource'/'x-load_sdcard.bin.ift $imageDestination'/'MLO
>echo "Finished"
>bye
If you know a thing or two about expect-lite, then you'll know that the above variables will be read as "shell" variables. The problem is that, as far as I know, SFTP doesn't allow the use of variables. Is there a way to tell expect-lite to use the predefined variables instead of trying to use "shell" variables? Or is there some clever way to get around this limitation without removing the variables?
All help is greatly appreciated.
Dreligor,
There is no scope issue. Expect-lite variables are all of global scope (as stated in the documentation). I think the problem is that you are using quotes, which is making things more difficult. You should try:
$imageDestination=/the/destination
$imageSource=/the/source
>sftp $userName'#'$buildMachine
>$password
>get $imageSource/x-load_sdcard.bin.ift $imageDestination/MLO
>echo "Finished"
>bye
Craig Miller - author and maintainer of expect-lite
After some experimentation it turns out that this is a scope issue. The solution is to simply move the variable declarations down. They need to be declared after the script has connected to the remote machine via sftp. The fixed code is as follows:
>sftp $userName'#'$buildMachine
>$password
$imageDestination="/the/destination"
$imageSource="/the/source/"
>get $imageSource'/'x-load_sdcard.bin.ift $imageDestination'/'MLO
>echo "Finished"
>bye
Hopefully this will help others.