Running R from CD-ROM

Can R be run from a CD-ROM drive? The computer is a stand-alone (no network or Internet connection) and I can't install anything on it, nor can I use a flash drive.
Thanks.

What do you mean by "can't install"?
You don't need to install R; you can just run it from a folder copied from somewhere else. If you have hard disk storage on the PC, you can copy C:\Program Files\R from another machine onto a CD-ROM, take the CD-ROM to the cripplebox, copy it to wherever you store your files, and run it from there (a rough sketch of the Windows case is below these comments). Worst case scenario is that you have to change the R_HOME environment variable. This works for Linux and Windows (you didn't say which OS you are on).
...unless your sysadmins have disabled executable permissions for your hard disk storage. Which is a real BOFH thing to do.
...but if they've done that I'd also suspect they've disabled executables from CD-ROM too.
...and if you don't have any writable hard disk storage, how the heck are you going to do any analysis?
...the real fix may be to kick the sysadmins: tell them you can't do your job without R installed on the machine.
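A minimal sketch of the Windows case, assuming hypothetical drive letters and folder names (D: for the CD-ROM drive, C:\Users\me\R-portable for the local copy; newer R builds keep the executables under bin\x64 rather than bin):

xcopy /E /I D:\R C:\Users\me\R-portable
set R_HOME=C:\Users\me\R-portable
C:\Users\me\R-portable\bin\Rgui.exe

The xcopy copies the whole R tree off the disc, set R_HOME points the session at the copied tree (only needed in the worst case mentioned above), and the last line launches R straight from that folder.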

You may have trouble with packages, but otherwise, the instructions for installing R on a USB key should be pertinent.

Related

RStudio incredibly slow only after I connect to my work VPN?

I have RStudio on Windows. It works fine before I connect to the internet through my VPN. After I connect, commands start to hang, autocomplete can take forever, and simple operations like 4 + 4 can take a minute or more.
I have a feeling RStudio is making connections under the hood. I would like to disable all of these connections no matter what.
I have experienced the same thing - I am assuming your actual work is on a file server and not local. In my case the culprit was not RStudio itself but the RStudio project files. RStudio creates a generally small, hidden folder named .Rproj.user in the project directory with some settings. This, living on the file server, caused constant reads/writes through my VPN connection.
I unfortunately had to either (a) move projects off the file server into a local copy (not bad since I can use a company GitLab), or (b) delete the .Rproj file and .Rproj.user folder from the project directory on the server and use here::here() or something like that as a workaround in my workflow.
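A quick sketch of option (b), assuming a Windows command prompt and a made-up UNC path to the project on the file server:

rmdir /S /Q \\fileserver\projects\myproject\.Rproj.user
del \\fileserver\projects\myproject\myproject.Rproj

This removes the RStudio state that lives on the share and causes the chatty I/O over the VPN; paths in scripts can then be handled with here::here() or a similar workaround, as described above.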
As another possibility, I have seen R itself installed on a personal server drive instead of locally, usually to avoid needing administrator privileges to install. This is not a great idea and can also result in extremely slow performance through a VPN connection. You can check where R is installed as well, although it sounds like this is not the problem based on what you described.
Maybe it is something else, but this is what I found last week based on a very similar experience.

Unable to access internet within "R" on cmd behind proxy

I have been using R on the command line (bash). I am unable to access the internet to download any packages. I have set a system-wide proxy and tested it with wget, which works. The install.packages() command, however, does not.
Per another user's advice, I also tried setting the proxy in the .Rprofile file. That didn't help either. Please advise.
I recently ran into the same issue on my work machine. Our Firm uses Cylance as its antivirus software. Cylance was quarantining the file "internet.dll" that R uses to access the Internet. Fortunately, however, it only does so in the 32-bit version of R. For me, there were two solutions:
First, I was able to download packages directly from the 32-bit version of R (outside of RStudio). This works fine. The downloaded packages will run in 64-bit RStudio.
The longer-term solution was to submit an IT service request to release this file from quarantine (that is, to "whitelist a blocked entity"). At my Firm this was promptly done, as there is (obviously) nothing unsafe about this R file.
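For anyone whose problem really is proxy configuration rather than antivirus, the environment-variable form usually goes in ~/.Renviron, which R reads at startup; something like the following, where the host and port are placeholders:

http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080

install.packages() downloads through these variables, so this is usually the first thing to check when wget works but R does not.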

Very slow debugging

I've cross-compiled Qt, created an SD card image, and mounted it using losetup. Compilation is much faster now compared to a direct sshfs mount. The application runs OK. Now I want to debug, which is dead slow, and it appears as if the files are being copied back to the dev machine for debugging. I see this suggestion:
File transfers from remote targets can be slow. Use "set sysroot" to access files locally instead.
I'm using gdb-multiarch and have gdbserver on the target board.
I'm kind of lost here. Where do I set this option? I've supplied the --sysroot argument to the binary, but to no avail. Any help is really appreciated.
Update: I'm using Qt Creator for development.
sysroot is a gdb setting. You can set it in gdb with the set sysroot command; gdb's built-in help describes it:
(gdb) help set sysroot
Set an alternate system root.
The system root is used to load absolute shared library symbol files.
For other (relative) files, you can add directories using
`set solib-search-path'.
This setting controls how gdb tries to find various files it needs, and in particular the executable and shared libraries that you are debugging.
Recent versions of gdb default sysroot to target:, which means "fetch the files from the target". If you're debugging locally, this is just local filesystem access; but if you are debugging remotely and have a slow connection, this can be a bit painful. In order to make this faster, the idea is to keep a local copy of all the files you'll need, and then use set sysroot to point gdb at this local copy.
The main issue with this approach is that if your local copy is out of sync with the remote, you can end up confusing gdb and getting nonsense results. I am not certain but maybe enabling build-ids alleviates this problem somewhat (certainly in theory gdb can detect build-id mismatches and warn, I just don't recall whether it actually does).
As Tom Tromey suggested, adding set sysroot {my sysroot local path} as a starting command in the debugger has worked for me.
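A minimal sketch of what that looks like in practice, with placeholder paths and addresses (in Qt Creator the same set sysroot line can usually be added to the debugger's startup commands):

gdb-multiarch -ex "set sysroot /home/me/sdcard-rootfs" ./myapp

or, from inside an interactive session, before connecting to gdbserver:

(gdb) set sysroot /home/me/sdcard-rootfs
(gdb) target remote 192.168.1.50:2345

With the sysroot pointing at a local copy of the target's root filesystem, gdb loads the executable and shared libraries from local disk instead of transferring them from the remote target.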

Cygwin SVN: E200030: SQLite disk I/O error

When I use Subversion in Cygwin to update a repository, some directories update successfully, while others fail with the error message:
svn: E200030: sqlite: disk I/O error
When doing svn update again on the same repository, a different directory can get the same error. Sometimes there is an SVN instruction after the above error message.
This happened due to a change someone wanted in Cygwin's SQLite package. I was the maintainer of that package when this question was asked, and I made the change that caused this symptom.
The change was released as Cygwin SQLite version 3.7.12.1-1, and it fixed that one person's problem, but it had this bad side effect of preventing Cygwin's Subversion package from cooperating with native Windows Subversion implementations.
What Happened?
The core issue here is that Subversion 1.7 changed the working copy on-disk format. Part of that change involves a new SQLite database file, .svn/wc.db. Now, in order to implement SQLite's concurrency guarantees, SQLite locks the database file while it is accessing it.
That's all fine and sensible, but you run into a problem when you try to mix Windows native and POSIX file locking semantics. On Windows, file locking almost always means mandatory locking, but on Linux systems — which Cygwin is trying to emulate — locking usually means advisory locking instead.
Keep that difference in mind; it is where the "disk I/O error" ultimately comes from.
The Cygwin SQLite 3.7.12.1-1 change was to build the library in "Unix mode" instead of "Cygwin mode." In Cygwin mode, the library uses Windows native file locking, which goes against the philosophy of Cygwin: where possible, Cygwin packages call POSIX functions rather than going directly to the Windows API, so that cygwin1.dll can provide the proper POSIX semantics.
POSIX advisory file locking is exactly what you want with SQLite when all the programs accessing the SQLite DBs in question are built with Cygwin, which is the default assumption within Cygwin. But, when you run a Windows native Subversion program like TortoiseSVN alongside a pure POSIX Cygwin svn, you get a conflict. When the TortoiseSVN Windows Explorer shell extension has the .svn/wc.db file locked with a mandatory lock and Cygwin svn comes along and tries an advisory lock on it, it fails immediately. Cygwin svn assumes a lock attempt will either succeed immediately or block until it can succeed, so it incorrectly interprets the lock failure as a disk I/O error.
How Did We Solve This Dilemma?
Within Cygwin, we always try to play nice with Windows native programs where possible. The trick was to find a way to do that, while still playing nice with Cygwin programs, too.
Not everyone agreed that we should attempt this. "Cygwin SQLite is part of Cygwin, so it only needs to work well with other Cygwin programs," one group would say. The counterpartisans would reply, "Cygwin runs on Windows, so it has to perform well with other Windows programs."
Fortunately, we came up with a way to make both groups happy.
As part of the Cygwin SQLite 3.7.17-x packaging effort, I tested a new feature that Corinna Vinschen added to cygwin1.dll version 1.7.19. It allowed a program to request mandatory file locking through the BSD file locking APIs. My part of the change was to make Cygwin SQLite turn this feature on and off at the user's direction, allowing the same package to meet the needs of both the Cygwin-centric and Windows-native camps.
This Cygwin DLL feature was further improved in 1.7.20, and I released Cygwin SQLite 3.7.13-3 using the finalized locking semantics. This version allowed a choice of three locking strategies: POSIX advisory locking, BSD advisory locking, and BSD/Cygwin mandatory locking. So far, the latter strategy has proven to be completely compatible with native Windows locking.
Later, when Jan Nijtmans took over maintenance of Cygwin SQLite, he further enhanced this mechanism by fully integrating it with the SQLite VFS layer. This allowed a fourth option: the native Windows locking that Cygwin SQLite used to use before we started on this journey. This is mostly a hedge against the possibility that the BSD/Windows locking strategy doesn't cooperate cleanly with a native Windows SQLite program. So far as I know, no one has ever needed to use this option, but it's nice to know it's there.
Alternate Remedy
If the conflict you're having is between Cygwin's command line svn and the TortoiseSVN Windows Explorer shell extension, there's another option to fix it. TortoiseSVN ships with native Windows Subversion command-line programs as well. If you put these in your PATH ahead of Cygwin's bin directory, you shouldn't run into this problem at all.
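A sketch of that PATH change from a Cygwin ~/.bashrc, assuming TortoiseSVN's command-line tools were installed to their default location (the exact path is an assumption; adjust it to wherever svn.exe actually lives):

export PATH="/cygdrive/c/Program Files/TortoiseSVN/bin:$PATH"

With that in place, which svn should report the TortoiseSVN binary rather than /usr/bin/svn.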
I encountered the same problem, and it appears (in my case at least) to be an interaction with TortoiseSVN. Disabling TortoiseSVN's status icon cache (Settings > Icon Overlays > Status cache "None" > Apply) got everything working just fine for me.
(That obviously doesn't resolve the underlying problem, which appears to stem from a change in how the SQLite package that Cygwin's Subversion relies on accesses the database. As I write, there's active [if slow] discussion on the Cygwin mailing list about how to resolve this.)
ldd /usr/bin/svn shows that SVN depends on /usr/bin/cygsqlite3-0.dll.
After I changed libsqlite3 from 3.7.12 back to 3.7.3, the problem went away, so this may be a SQLite library problem.
In TortoiseSVN, ticking off the "Refresh shell overlays" option during Clean up solved the problem for me.
For others' reference, I just had this same error (svn: E200030: sqlite: disk I/O error) and found that one of my log files was taking up all my disk space, so nothing could be written to the HDD because there was no free space.
Run this to make sure you have enough disk space:
df -h
If you don't, delete some large files (I just removed some backup and log files).
Then I just needed to run:
svn cleanup
This resolved the error for me.

Running Visual Studio in Parallels for Mac - problem with debugging sites sitting on an OS X drive

I've installed Parallels Desktop on my MacBook to be able to run Visual Studio 2008 in an XP installation. Everything works great, except that problems started when I put my websites in my Sites folder on the OS X file system (which happens by default anyway, because the My Documents folder is mapped to the Mac's Documents folder; besides, I'd rather keep my code there so that both OSes can easily access it).
When trying to build or debug I get this error:
Failed to start monitoring changes to 'Z:\xxx...'
How do I get this to work under Parallels, from the shared drive?
Parallels exposes OS X folders to the VM as network drives, and Windows can't monitor file changes on network drives, so if you do this directly, it'll be broken.
If you want to keep them in sync though, use Live Mesh (http://www.mesh.com) and install it on both the host and guest. A little roundabout, but it'll make it so both copies are maintained (and Live Mesh is handy for other things too)
I recently flipped over to putting my source code onto my Mac volume so I could use Time Machine to back it up, and immediately got this same problem with my ASP.NET app. Other, procedural applications built just fine, by the way.
I tried all sorts of things, including using Samba on the Mac side to share the directory, which led into the "too many BIOS commands" error described elsewhere. Unfortunately for me, the Registry hacks to fix that problem never worked for some reason.
I finally found another solution that avoids Samba and just uses the regular Parallels Shared Folders. It too is a Registry hack, but this one simply turns off file change monitoring for ASP.NET. It is a bit heavy-handed, but gets my builds to work again.
The reference for this change is here:
http://support.microsoft.com/kb/911272
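If I'm remembering that KB article correctly, the change amounts to adding an FCNMode value under the ASP.NET registry key; treat the exact key and value below as something to double-check against the article rather than as gospel:

reg add "HKLM\SOFTWARE\Microsoft\ASP.NET" /v FCNMode /t REG_DWORD /d 1

Here 1 turns off ASP.NET's file change notifications machine-wide; deleting the value restores the default behaviour, and you will likely need to restart the web server for the change to take effect.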
The downside to this approach, I am finding, is that you need to be more deliberate about recompiling, or restarting the web server, as changes during development don't just magically appear anymore. I am still deciding whether that is a useful tradeoff.
UPDATE: After several days of this, development was just too difficult and, sadly, what I reverted to was keeping my source inside the Parallels virtual disk. To enable Time Machine backups and Spotlight searches, I used a lightweight MS utility called SyncToy to push stuff out of Parallels and out to my Mac drive several times a day. Despite the high hack factor, it is working well.
I know this isn't strictly a solution, but VMware Fusion is superior when it comes to shared drive space on a virtual machine. It's what I currently use and it hasn't let me down thus far...
People always give me odd looks when they see visual studio on my mac :P
Try moving the project onto the VM's C drive. It's not an ideal situation, but you can access the VM's C drive from OS X.
I have a similar problem with a PHP site that uses an MS Access database (it's a client's system). I have aliases that point to the PHP site on the VM so that I can still do all of my coding in OS X. To do this I created a network share on the VM and then connected to it from OS X. Once connected, make the aliases. If the network drive is not mounted and you open a file in OS X, it will try to reconnect. It means the VM will need to be running to get to the files, but this isn't normally a problem since the VM is hosting the site anyway.
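If you prefer the Terminal to the Finder for connecting to that share, the same thing can be sketched roughly as follows, with the VM hostname, share name, user, and mount point all made up:

mkdir -p ~/vm-site
mount -t smbfs //developer@winxp-vm/website ~/vm-site

Once mounted, the aliases (or symlinks) can simply point at the files under ~/vm-site.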
.NET has funny issues trying to debug the objects on a network drive.
Make sure that you have full trust configured on your local network between your Mac and the XP install.
Check out: http://msdn.microsoft.com/en-us/library/aa302361.aspx
If that research doesn't pan out, I'm afraid you will have to look into the option of keeping it on the VM disk and moving it when you need it.
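For what it's worth, granting full trust to code on a mapped drive was typically done with caspol in the .NET 2.0 era; something along these lines, where the drive letter and group name are assumptions and the existing policy tree should be checked first with caspol -listgroups:

caspol -machine -addgroup 1 -url "file://Z:/*" FullTrust -name "Parallels share"

This adds a machine-level code group granting FullTrust to anything loaded from Z:, which was the usual workaround for .NET refusing to fully trust assemblies on a network drive.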
I see a similar problem on my machine connected to the Windows domain. My Documents is mapped to a network share and I can't debug, run, etc. I eventually had to move to my local disk for debugging.
I definitely recommend Live Mesh as a way to keep directories in sync. Just keep the VM's directory in sync with the Mac's directory.
Or use SVN to hold copies on both machines and commit/update as appropriate. That way you get versioning and history, and if your project grows bigger, you can share it with other devs.
I know dropbox also has history and sharing, but not check in/check out/conflicts and all the other advantages of a real source control.
Oh, if you have money you can also go for TFS. I would but it is just too expensive :)

Resources