New to R. How does one access and edit the .Rprofile file?
On start-up, R will look for the .Rprofile file in the following places (see https://csgillespie.github.io/efficientR/set-up.html):

- R_HOME, the directory in which R is installed. Find out where your R_HOME is with the R.home() command.
- HOME, the user's home directory. You can ask R where this is with path.expand("~").
- R's current working directory. This is reported by getwd().

Note that although there may be different .Rprofile files, R will only use one in a given session. The preference order is:

Current project > HOME > R_HOME
To create a project-specific start-up script create a .Rprofile file in the project’s root directory.
You can access and edit the different .Rprofile files via
file.edit(file.path("~", ".Rprofile")) # edit .Rprofile in HOME
file.edit(".Rprofile") # edit project specific .Rprofile
There is information about the options you can set via
help("Rprofile")
As mentioned, the link above provides additional details, but the points outlined here should show you where the files are and how to access them.
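For concreteness, here is a minimal sketch of creating a project-specific .Rprofile from the shell. The directory /tmp/myproject stands in for your project's root, and the options inside the file are illustrative examples only, not recommendations:

```shell
# /tmp/myproject stands in for your project's root directory.
mkdir -p /tmp/myproject && cd /tmp/myproject

# Write a small project-specific .Rprofile; the options are examples only.
cat > .Rprofile <<'EOF'
# Runs at the start of every R session launched from this directory
options(digits = 4)           # example option
message(".Rprofile loaded")   # confirm the file was picked up
EOF

cat .Rprofile
```

Any R session started from that directory will then run the file before handing you the prompt.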
I've recently discovered the karate framework. Great work! I've extended the standalone jar with custom java helpers to be able to access DB and ssh servers.
I transfer logs and results files from the ssh server to the server in which I run karate.
I'd like to store these files alongside the HTML report. But while the test runs, the report folder has a temporary name; it is renamed at the end of the run.
Is there a way to get this temporary name (or path) to be able to copy files into it?
Best regards,
Patrice from France
This is a bit of a surprise for me, because as far as I know the folder is target/karate-reports. Maybe this is some weird thing that happens only on Windows. I'd request you to see if you can debug and contribute code to the project, that would be really great!
Since you are writing Java code and adding it to the classpath I guess, then you should probably use the Runner API that gives you more control, and also the option to disable Karate trying to "backup" the existing old reports folder. This is explained here: https://stackoverflow.com/a/66685944/143475
There is also an option to customize the name of the reports folder: reportDir().
For this kind of control, we recommend using Karate in Java / Maven project fashion, but you can decide what works best for you.
My company's data department uploads a .csv file daily to a remote folder that I can only access through my company's network using FileZilla.
Every day, I use the newest .csv file and process the data in R. I want to automate this process by reading the daily .csv file from the remote folder with R's read.csv function.
How can I tell FileZilla to copy the file in the shared folder to a local folder on my PC, and do this every day at 6:00 a.m.? If this isn't possible, how can I access the remote folder through R and read the .csv file from there?
Thanks in advance!
EDIT:
As seen here, FileZilla does not allow any sort of automation. You can use the client WinSCP instead, write a script to download/upload files from/to a remote SFTP server and schedule the script to run every n days using Windows Task Scheduler.
Now, in order to access an SFTP server from R, you can use the RCurl package. Unfortunately, this closed question (which was closed because it was not even a question to begin with) purges unwanted lines of code from an FTP server (even though the title says SFTP), and it doesn't specify the user, password or port specs. Moreover, it uses the writeLines() command, which, as I understand, is used to create files, not download them.
This question specifically refers to downloading a .csv file from a shared folder using SFTP protocol. Given that FileZilla is no good for this, how can I manage to do this in R using RCurl?
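As a rough sketch of what the eventual download command could look like, curl also speaks SFTP from the command line. The host, user, password and remote path below are all placeholders, so the command is only echoed here rather than executed:

```shell
# All of these values are placeholders -- substitute your own server details.
HOST="sftp.example.com"
USER="me"
PASS="secret"
REMOTE="/shared/daily.csv"

# curl can fetch over SFTP; -k relaxes host-key checking, which you may or
# may not want depending on your known_hosts setup.
CMD="curl -s -k -u ${USER}:${PASS} sftp://${HOST}:22${REMOTE} -o daily.csv"

echo "$CMD"   # echoed rather than executed, since the host is fictional
```

The same URL and credentials could then be handed to RCurl's getURL() inside R, and the whole thing scheduled with Windows Task Scheduler.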
I'm a little bit confused about process and open file tables.
I know that if 2 processes try to open the same file, there will be 2 entries in the open file table. I am trying to find out the reason for this.
Why are there 2 entries created in the open file table when 2 different processes open the same file? Why can't it be done with 1 entry?
I'm not quite clear what you mean by "file tables". There are no common structures in the Linux kernel referred to as "file tables".
There is /etc/fstab, which stands for "filesystem table", which lists filesystems which are automatically mounted when the system is booted.
The "filetable" Stack Overflow tag that you included in this question is for SQL Server and not directly connected with Linux.
What it sounds like you are referring to when you talk about open files is links. See Hard and soft link mechanism. When a file is open in Linux, the kernel maintains what is basically another hard link to the file. That is why you can actually delete a file that is open and the system will continue running normally. Only when the application closes the file will the space on the disk actually be marked as free.
So for each inode on a filesystem (an inode is generally what we think of as a file), there are often multiple links--one for each entry in a directory, and one for each time an application opens the file.
Update: Here is a quote from the web page that inspired this question:
Each file table entry contains information about the current file. Foremost, is the status of the file, such as the file read or write status and other status information. Additionally, the file table entry maintains an offset which describes how many bytes have been read from (or written to) the file indicating where to read/write from next.
So, to directly answer the question of why there are 2 entries in the open file table when 2 different processes open the same file: 2 entries are required because they may contain different information. One process may open the file read-only while the other opens it read-write. And the file offset (position within the file) for each process will almost certainly be different.
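The independent offsets are easy to observe from a shell, where each redirection-based open creates its own open file table entry. The file path and byte counts below are arbitrary:

```shell
# Create a small file, then open it twice -- each open gets its own
# open file table entry, and therefore its own offset.
printf 'abcdefghij' > /tmp/offset_demo.txt
exec 3< /tmp/offset_demo.txt   # first open: fd 3
exec 4< /tmp/offset_demo.txt   # second open: fd 4, independent entry

a=$(dd bs=1 count=4 <&3 2>/dev/null)   # advances fd 3's offset to 4
b=$(dd bs=1 count=4 <&4 2>/dev/null)   # fd 4 still starts at offset 0
c=$(dd bs=1 count=4 <&3 2>/dev/null)   # fd 3 resumes at offset 4

echo "$a $b $c"   # abcd abcd efgh
exec 3<&- 4<&-
```

If fd 4 had shared fd 3's open file table entry (as happens when a descriptor is duplicated rather than the file re-opened), the second read would have returned "efgh" instead.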
I need to change an environment variable $NLS_LANG for Oracle encoding configurations.
I followed these steps:

1. Opened the /etc/profile file.
2. Added the line export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P9 to the file.

When I then ran:

echo $NLS_LANG

the console printed the old value. After executing:

. /etc/profile

the console correctly printed the newly assigned value of the variable.

The main problem is that when I open a new console and run the echo command, it still prints the old value of $NLS_LANG.

So, what is the correct way to persist an environment variable on Solaris?

Thanks...

PS: the Solaris version is 5.10.
.profile is only read by a login shell. Thus, you have to start your shell with a - as the first argument to force a login shell (or, as @cnicutar suggested, log out and log in again).
As an alternative, you can put your assignment into a file that is read at "normal" (interactive) invocation, e.g., .kshrc in case of a Korn Shell.
You need to add the export to the .profile file in your home directory. Put
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P9
in your /home/folder/.profile file. This way, every time you log on, the variable will be configured.
Environment variables in /etc/profile are loaded when you log in with the user. Hence, if you do not log out and log in again, the new $NLS_LANG will not be loaded.
If you do not want to log in again now, you can use export to set the new value in the current shell:
NLS_LANG="new value"
export NLS_LANG
or directly
export NLS_LANG="new value"
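The "only shells that read the file see the variable" behaviour can be sketched with a throwaway profile fragment; the path /tmp/demo.profile is purely for illustration:

```shell
# Write a throwaway profile fragment (path is illustrative only).
cat > /tmp/demo.profile <<'EOF'
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P9
EOF

# A shell that sources the file sees the variable...
VAL=$(sh -c '. /tmp/demo.profile; echo "$NLS_LANG"')
echo "$VAL"   # AMERICAN_AMERICA.WE8ISO8859P9

# ...whereas a freshly opened shell that never sources it does not,
# which is exactly the symptom described in the question.
```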
I can't find any reliable file syncing program for my Mac, so I have been using the command line Rsync between two folders.
I have been using rsync -r source destination.

- Does this sync files both ways, or only from the source to the destination?
- If a file was previously synced between the two folders but then deleted because it was no longer needed, does it get deleted on both the source and destination, or will it always be copied back to wherever it is missing?
No, rsync will synchronise the contents of a remote directory to a local directory. In that respect it is one-way. Optionally you can force it to delete local files that no longer exist in the remote folder.
If you want to keep the most recent changes on both machines, you would have to supply a more complicated rsync incantation and set up both machines as rsync servers. I imagine doing so will get you into trouble eventually, especially if you want to be authoritarian over deletion.
In any case, you can use the -u (or --update) option which will skip any files that are newer on the destination end. You do have to worry about the timestamps, and this will not handle any conflicts or merges. Still... It may be as simple as:
rsync -u -r target1 target2
rsync -u -r target2 target1
That won't do anything about deletion. You have no way of knowing that a missing file on one target was deleted there instead of a new file having been created on the other target.
This is why version control was invented... And for people who are scared of version control, services like Dropbox exist.
Answering the original question:

1. rsync synchronizes files in only one direction, depending on whether you push or pull. For the push and pull mechanisms, see the manual page (man rsync). So, for the rest of your question, do not assume that it works both ways.

2. Files are only deleted in the destination directory. See rsync --help for the --delete option, which deletes extraneous files from destination dirs, and the other delete-related options.

3. Missing files are copied only to the destination directory, e.g., to the remote machine/directory if you are pushing.

A sample push:

rsync -avz /home/local_dir/abc.txt remoteuser@192.168.xx.xx:/home/remoteuser/

If a file named abc.txt is already present in the destination directory, it will be updated depending on whether the local copy is newer. If abc.txt is not present in the remote directory, a new file named abc.txt will be created with the contents of the local version.

A sample pull:

rsync -avz remoteuser@192.168.xx.xx:/home/remoteuser/abc.txt /home/local_dir/

If a file named abc.txt is already present in the local directory, it will be updated depending on whether the remote copy is newer. If abc.txt is not present in the local directory, a new file named abc.txt will be created with the contents of the remote version.