I am trying to execute a command from a different directory but I keep getting a "No such file or directory" response. I have been stuck for about 6 hours now and cannot figure it out. I am very new so please take it easy.
I created a directory (Learning) with a subdirectory (fileAsst), and two subdirectories (Earth, Galaxy) within "fileAsst." I am trying to execute a separate file to check to see if I have built the desired directory correctly.
I type in ~unix/bin/fileAsst-1 to try and execute to my directory. But it just is not working. Please help.
Learning/fileAsst/Earth/zaphod.txt
Learning/fileAsst/Galaxy/trillian.txt
~unix/bin/fileAsst-1 (what I'm trying to execute to check my Learning directory)
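For reference, a directory tree like the one above can be built from the shell in a few commands. A minimal sketch, assuming you start from the directory where Learning should live:

```shell
# build the directory tree (-p creates intermediate directories as needed)
mkdir -p Learning/fileAsst/Earth Learning/fileAsst/Galaxy

# create the two placeholder files
touch Learning/fileAsst/Earth/zaphod.txt
touch Learning/fileAsst/Galaxy/trillian.txt

# verify the structure
find Learning -type f
```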
My guess, based on your use of the phrase "execute to my directory", is that you need to read something like this.
I cannot be sure but it appears as though you want to change your current directory to some other directory using the cd command. Having set your current working directory appropriately you wish to execute some command.
But a guess is only a guess and it's somewhat dangerous to give an answer when unfamiliar nouns and verbs are used in the question.
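If the guess is right, the fix has a simple two-step shape: change directory first, then run the command. A runnable sketch with a made-up scratch path (pwd stands in for whatever command you actually want to run):

```shell
# make a scratch directory to stand in for the directory you want to work in
mkdir -p /tmp/demo-workdir

# step 1: set the current working directory with cd
cd /tmp/demo-workdir

# step 2: run the command from there; pwd stands in for the real command
pwd
```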
For example, in the same folder I have scripts like script1.sh and script2.sh, and then an output.vcf (bioinformatics stuff, but I guess it doesn't matter). I am sure one of those scripts created the output file, but I don't know which of them.
Is there any way to figure it out?
Thank you!
IMHO you can't get this information after the fact. But each UNIX has its own audit subsystem, and if you activate it you can see which file operation (in this case, file creation) was performed by which program (shell script).
Actually, there is a way: you can browse the scripts and search for the filename in question. There will be a problem if both scripts mention this filename, though.
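That search is one grep away. A sketch, assuming the scripts sit in the current directory and the output file is named output.vcf (the script contents here are made up so the example is self-contained):

```shell
# create two stand-in scripts; only script2.sh mentions the output file
printf 'echo hello\n' > script1.sh
printf 'sort input.vcf > output.vcf\n' > script2.sh

# -l prints only the names of the files that contain the pattern
grep -l 'output.vcf' script1.sh script2.sh
# → script2.sh
```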
I am facing a problem with the working directory changing.
Here are my steps to reproduce it:
1. Start an R session (the working directory matches the one set in the settings).
2. Change the current working directory and start coding.
3. Start one more new session (the working directory for the second session becomes the one used in the previous session, not the one set in the settings).
This is a serious problem if I work with folders that contain many files, as the new session just fails to start.
You can use the following at the beginning of each script:
# set the working directory to the location of the current script
library(rstudioapi)
current_path <- getActiveDocumentContext()$path
if (getwd() != dirname(current_path)) {
  setwd(dirname(current_path))
}
This piece of code sets the working directory to the location of the script. It's really useful when working with multiple scripts that have lots of dependencies and fail due to a wrong working directory.
EDIT
After understanding the desired behaviour better from the comments: you should wrap your code in a project hierarchy and set the default working directory of the project to the desired one.
The code I added above is suitable for the case where you want your scripts to run regardless of the session's working directory.
Disclaimer: I am very new here.
I am trying to learn R via RStudio through a tutorial and very early have encountered an extremely frustrating issue: when I am trying to use the read.table function, the program consistently reads my files (written as "~/Desktop/R/FILENAME") as going through the path "C:/Users/Chris/Documents/Desktop/R/FILENAME". Note that the program is considering my Desktop folder to be through my documents folder, which is preventing me from reading any files. I have already set and re-set my working directory multiple times and even re-downloaded R and RStudio and I still encounter this error.
When I enter the entire file path instead of using the "~" shortcut, the program is successfully able to access the files, but I don't want to have to type out the full file path every single time I need to access a file.
Does anyone know how to fix this issue? Is there any further internal issue with how my computer is viewing the desktop in relation to my other files?
I've attached a pic.
Best,
Chris L.
The ~ tells R to look in your default directory, which on Windows is your Documents folder; this is why you are getting this error. You can change the default directory in the RStudio settings or your R profile. It just depends on how you want to set up your project. For example:
Put all the files in the working directory (getwd() will tell you the working directory for the project). Then you can just call the files with the filename, and you will get tab completion (awesome!). You can change the working directory with setwd(), but remember to use the full path not just ~/XX. This might be the easiest for you if you want to minimise typing.
If you use a lot of scripts, or work on multiple computers or cross-platform, the above solution isn't quite as good. In this situation, you can keep all your files in a base directory, and then in your script use the file.path function to construct the paths:
base_dir <- 'C:/Desktop/R/'
read.table(file.path(base_dir, "FILENAME"))
I actually keep the base_dir assignment as a code snippet in RStudio, so I can easily insert it into scripts and know explicitly what is going on, as opposed to configuring it in RStudio or the R profile. There is a conditional in the code snippet which detects the platform and assigns the directory correctly.
When R reports "cannot open the connection" it means one of two things:
The file does not exist at that location - you can verify whether the file is there by pasting the full path echoed back in the error message into windows file manager. Sometimes the error is as simple as an extra subdirectory. (This seems to be the problem with your current code - Windows Desktop is never nested in Documents).
If the file exists at the location, then R does not have permission to access the folder. This requires changing Windows folder permissions to grant R read and write permission to the folder.
On Windows, if you launch RStudio from the folder you consider the "project workspace home", then all path references can use the dot as "relative to workspace home", e.g. "./data/inputfile.csv".
I am using scalapact for CDC test.
My tests are running fine and the pact file is generated under the target/pacts folder.
I have another folder "files" where I want those pact files to be generated after running the pact tests.
Is there any way to configure the default path for the pact files?
This is an area that needs some attention in Scala-Pact, however, someone kindly did a PR for us a while ago that lets you set an environment variable called pact.rootDir.
In practice, on linux/mac that variable is a bit tricky to set because of the ., so exporting it or just using -Dpact.rootDir="<my desired path>" in the command arguments doesn't seem to work. Instead, you need to do this:
env "pact.rootDir=<my desired path>" bash. I haven't tried this on Windows so I don't know if you'd have the same issue.
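The reason the usual route is tricky is that the shell itself only accepts variable names made of letters, digits, and underscores, so a name containing a dot can't be created with export or plain assignment. env, on the other hand, places the raw string into the child process's environment. A sketch demonstrating this (the path is made up):

```shell
# 'export pact.rootDir=...' would be rejected: the dot makes it an
# invalid shell variable name

# env writes the raw name=value string into the child's environment,
# and printenv reads it back by name
env "pact.rootDir=/tmp/my-pacts" printenv pact.rootDir
# → /tmp/my-pacts
```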
I've just raised an issue to try and make this easier in the future:
https://github.com/ITV/scala-pact/issues/101
As an alternative, note that the pact directory is really kind of a scratch/tmp area that allows Scala-Pact to compile its output. If you're running this as part of a build script, you may just want to add a step to copy the assets to a new location once they've been generated.
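That copy step can be a single line in the build script. A sketch assuming the default target/pacts output directory and a files/ destination (the pact filename here is invented for the example):

```shell
# stand-in for what a pact run would leave behind
mkdir -p target/pacts
printf '{}\n' > target/pacts/consumer-provider.json

# copy the generated pacts to the directory you actually want
mkdir -p files
cp target/pacts/*.json files/
```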
Also, for some reason we made reading from a directory way easier than writing to one. If you need to read from a dir such as during verification, you can just add --source <my desired path> on the command line.
Hope that helps.
On my last UNIX setup, I was able to simply type a binary's name if I was in the same directory and it would execute it. However on this new setup, I have to preface binary names with ./ if I want to execute them. Anyone know how to circumvent this?
Thanks.
The conventional way to address this (and probably the way it was done on your previous setup) is to add . to your PATH environment variable. So if your PATH is /usr/bin:/bin, then add . to the end (along with the : separator) so you have /usr/bin:/bin:.. Exactly how to do that varies by shell. A quick Google will no doubt get you the answer for your shell.
Do be aware that there are potential negative security implications to that, though, especially on a shared service. If an attacker manages to get an evil file in a directory where you are, and to name that file a normally-innocuous command (like ls), they could cause you to unintentionally run the evil file.
For this reason, if you are going to do this, at least make sure you put . as the last item in your PATH.
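A sketch of what that looks like, and of the lookup it enables (the script name is made up):

```shell
# append . to PATH so the current directory is searched last
export PATH="$PATH:."

# create a tiny executable in the current directory
printf '#!/bin/sh\necho hello\n' > hello-demo.sh
chmod +x hello-demo.sh

# now it runs without the ./ prefix, because . is on the PATH
hello-demo.sh
# → hello
```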
I'm guessing you are using the default shell and that the shell is bash.
Edit /etc/bashrc (or your own ~/.bashrc) and add this:
export PATH=$PATH:.