I want an R function which makes my loop run every 5 minutes.
I have a loop that downloads market data from Google Finance. I want this loop to run at an interval of every 30 minutes.
Is it possible?
An alternative to making your script loop: use an external job-scheduling tool to call your script at the desired interval. If you are on Linux, I recommend checking out cron. Here's an SO answer describing how to set up a cron job to kick off an R script: https://stackoverflow.com/a/10116439/819544
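As a sketch, a crontab entry that launches the script every 30 minutes might look like the following (the script path is a placeholder, and it assumes Rscript is on the PATH):

```
*/30 * * * * Rscript /path/to/download.R
```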
You can use Sys.sleep(100) to suspend execution for 100 seconds. It's a little less efficient than running another process in the same instance and setting up a proper timer, but it's pretty easy.
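A minimal sketch of that approach. Here `fetch_market_data` is a hypothetical stand-in for your download code, and the interval is shortened so the sketch finishes quickly; use `30 * 60` for a real 30-minute interval:

```r
# Hypothetical downloader: replace the body with your Google Finance code.
fetch_market_data <- function() Sys.time()

interval <- 1                # seconds; use 30 * 60 for a 30-minute interval
n_runs <- 3                  # or wrap the body in `repeat` to run indefinitely
timestamps <- vector("list", n_runs)
for (i in seq_len(n_runs)) {
  timestamps[[i]] <- fetch_market_data()
  Sys.sleep(interval)        # pause before the next download
}
```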
Related
I'm downloading data from a webserver which limits the number of queries to 100 per hour.
Do you know an effective way to insert a time lag between lines of an R script, so the script can run automatically and gather the data over (approx.) 10 hours?
Many thanks in advance!
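One way to sketch this: with a limit of 100 queries per hour, sleeping 3600 / 100 = 36 seconds between requests keeps you under the limit, and roughly 1000 queries then take about 10 hours. Here `run_query` is a hypothetical placeholder for your web request, and the delay and query count are shrunk so the sketch runs quickly:

```r
run_query <- function(i) i^2   # hypothetical: replace with your web request
delay <- 0.1                   # for real use: 36 (= 3600 / 100 queries/hour)
n_queries <- 5                 # for real use: ~1000 (about 10 hours)
results <- numeric(n_queries)
for (i in seq_len(n_queries)) {
  results[i] <- run_query(i)
  Sys.sleep(delay)             # throttle to stay under the rate limit
}
```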
I need to create an empty loop that runs for a given time, for example 2 hours. The loop just runs doing nothing; what it does doesn't matter, the important thing is that it keeps R busy for exactly 2 hours.
For example, take some kind of script:
model <- lm(Sepal.Length ~ Sepal.Width, data = iris)
After this line there is an empty loop that does something for exactly 2 hours:
for i....
After the empty loop has completed its 2 hours, execution continues with the subsequent lines:
summary(model)
predict(model, iris)
(No matter which line; what matters is that at a certain place in the code the loop spends 2 hours.)
How it can be done?
Thanks for your help.
There is no need to do this using a loop.
You can simply suspend all execution for n seconds using Sys.sleep(n). So to suspend for 2 hours you can use Sys.sleep(2 * 60 * 60).
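Applied to the script from the question, with the pause shortened here so the sketch finishes quickly (use 2 * 60 * 60 for the full 2 hours):

```r
model <- lm(Sepal.Length ~ Sepal.Width, data = iris)

t0 <- Sys.time()
Sys.sleep(2)                  # use 2 * 60 * 60 to suspend for 2 hours
elapsed <- as.numeric(difftime(Sys.time(), t0, units = "secs"))

summary(model)
predictions <- predict(model, iris)
```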
Very new to R and trying to modify a script to help my end users.
Every week a group of files is produced, and my modified script reaches out to the network, makes the necessary changes, and puts the files back, all nice and tidy. However, every quarter there is a second set of files that needs the EXACT same transformation. My thought was to check whether the files exist on the network with a file.exists statement, run through the script for them, and then continue with the normal weekly run. But with my limited experience, the only way I can think of writing it is like this ("lots of stuff" is a couple hundred lines), and I'm sure there's something I can do other than doubling the size of the program:
if (file.exists("quarterly.txt")) {
  # do lots of stuff
} else {
  # do lots of stuff
}
Both starja and lemonlin were correct: my solution was basically to turn my program into a function and create a program that calls the function with each dataset. I also skipped the 'else' portion of my if statement, which works perfectly (for me).
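A sketch of that refactor. Here `process_files` is a hypothetical name, and its one-line placeholder body stands in for the couple hundred lines of shared transformation:

```r
process_files <- function(path) {
  # ... "lots of stuff": the shared transformation, parameterised on path
  toupper(path)                # placeholder body for illustration
}

# Always run the weekly set; run the quarterly set only when it exists.
weekly <- process_files("weekly.txt")
if (file.exists("quarterly.txt")) {
  quarterly <- process_files("quarterly.txt")
}
```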
I have a very big file with a few hundred million rows. I am trying to use the tail command to see part of my file, but it takes a long time. Is there any option I can use to reduce the running time?
Here is my problem:
I have 3 R scripts which take a long time to run, each on different data. I would like to run them in parallel. One solution is to run them in 3 different R sessions, 3 different projects as mentioned here, or 3 different batch jobs. The problem is that I need to gather the data returned by these 3 scripts once they have run, and complete the rest of the computation in a fourth script that also takes a long time. It would need to be done under Windows.
1. Is it possible to run these 3 scripts in parallel and return the output in one workspace to complete the computation with the fourth automatically? If yes, how?
2. Let's suppose the answer to question 1 is yes. The fourth script outputs 3 variables. Is it possible to use them as input to the 3 scripts and loop until a condition in the 4th script is verified?
As an example you can assume the following scripts:
Script1.R
Sys.sleep(10)
a <- 3
Script2.R
Sys.sleep(10)
b <- 5
Script3.R
Sys.sleep(10)
c <- 56
In reality my scripts take 1 hour each to run. I would need to run the 3 scripts above in parallel and retrieve the 3 outputs a, b, and c in the same workspace to continue the computation. So instead of waiting 10 seconds for each script to compute, I would wait only 10 seconds for all 3 scripts together. Alternatively, you can take any 3 scripts of your choice that take a long time to compute.
Thank you.
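One way to sketch question 1 with the base parallel package (a PSOCK cluster, which works on Windows). Here the three example scripts from the question are recreated in a temp directory with shortened sleeps; in practice you would point parLapply at your real script files:

```r
library(parallel)

# Recreate the example scripts (sleeps shortened from 10 s for illustration)
dir <- tempdir()
writeLines("Sys.sleep(1); a <- 3",  file.path(dir, "Script1.R"))
writeLines("Sys.sleep(1); b <- 5",  file.path(dir, "Script2.R"))
writeLines("Sys.sleep(1); c <- 56", file.path(dir, "Script3.R"))

# Run the three scripts in parallel on separate worker processes
cl <- makeCluster(3)
results <- parLapply(cl, file.path(dir, paste0("Script", 1:3, ".R")),
                     function(f) {
                       env <- new.env()
                       sys.source(f, envir = env)  # run the script
                       as.list(env)                # collect what it created
                     })
stopCluster(cl)

# Gather a, b and c into the current workspace for the fourth script
list2env(do.call(c, results), envir = globalenv())
```

For question 2, the same parLapply call can sit inside a repeat loop: export the fourth script's three variables to the workers with clusterExport, rerun the three scripts, and break once the condition is met.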