I am running an R simulation on a multi-core system. The simulation result I am monitoring is a vector of length 900. My plan was to append this vector (row-wise) to a text file (using write.table) once each simulation ends. My simulations run from 1:1000. While I was working on my laptop the results were fine because the work is sequential. When I work on a cluster the simulations are divided across workers and there might be a conflict over who writes first. The reason for my claim is that I am getting impossible values in the first column of my text file (this column stores the simulation index). If you need sample code I can attach it.
There is no way to write to a text file with parallel threads that respects order. Each thread has its own buffer and no indication of when it is appropriate to write, because there is no cross-communication. So what will happen is they'll all try to write to the same file at the same time, even in the middle of another thread's write, which is why you're getting impossible values in your first column.
The solution is to write to a separate file for each thread, or to return the vector as the output of the multithreaded apply loop, and then combine the results sequentially at the end.
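A minimal sketch of the second approach, using the parallel package; run_one, its body, and the output file name are placeholders for the asker's actual simulation, not taken from the question:

library(parallel)

# Hypothetical stand-in for one simulation: returns the simulation index
# followed by a length-899 result, i.e. a vector of length 900
run_one <- function(i) {
  c(sim_index = i, rnorm(899))
}

cl <- makeCluster(detectCores() - 1)
results <- parLapply(cl, 1:1000, run_one)  # each worker returns its vector
stopCluster(cl)

# Combine sequentially in the master process and write the file once
out <- do.call(rbind, results)
write.table(out, "results.txt", row.names = FALSE, col.names = FALSE)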
Very new to R and trying to modify a script to help my end users.
Every week a group of files is produced and my modified script reaches out to the network, makes the necessary changes and puts the files back, all nice and tidy. However, every quarter there is a second set of files that needs the EXACT same transformation. My thought was to check whether the quarterly files exist on the network with a file.exists statement, run through the script, and then continue with the normal weekly one, but with my limited experience I can only think of writing it this way ("lots of stuff" is a couple hundred lines) and I'm sure there's something I can do other than double the size of the program:
if (file.exists("quarterly.txt")) {
  # do lots of stuff
} else {
  # do lots of stuff
}
Both starja and lemonlin were correct: my solution was basically to turn my program into a function and create a program that calls the function with each dataset. I also skipped the 'else' portion of my if statement, which works perfectly (for me).
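A rough sketch of that structure (the function name and file names are made up for illustration):

# All of the "lots of stuff" lives in one function, parameterised by the input file
process_files <- function(infile) {
  # ... a couple hundred lines of transformation ...
}

# The quarterly files are handled only when they exist; the weekly run always happens
if (file.exists("quarterly.txt")) {
  process_files("quarterly.txt")
}
process_files("weekly.txt")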
I am concatenating 1000s of nc-files (outputs from simulations) to allow me to handle them more easily in Matlab. To do this I use ncrcat. The files have different sizes, and the time variable is not unique between files. The concatenate works well and allows me to read the data into Matlab much quicker than individually reading the files. However, I want to be able to identify the original nc-file from which each data point originates. Is it possible to, say, add the source filename as an extra variable so I can trace back the data?
Easiest way: Online indexing
Before we start, I would use an integer index rather than the filename to identify each run, as it is a lot easier to handle, both for writing and then for handling in the Matlab programme. Rather than a simple monotonically increasing index, the identifier can have relevance for your run (you can even write several separate indices if necessary, e.g. a number for the resolution, the date, the model version, etc.).
So, the obvious way to do this that I can think of would be that each simulation writes an index to the file to identify itself, i.e. the first model run would write a variable
myrun=1
the second
myrun=2
and so on... then when you cat the files the data can be uniquely identified very easily using this index.
Note that if your spatial dimensions are not unique and, as you write, the number of time steps also changes from run to run, your index will need to be a function of all the non-unique dimensions, e.g. myrun(x,y,t). If any of your dimensions are unique across all files then that dimension is redundant in the index and can be omitted.
Of course, the only issue with this solution is that it means running the simulations again :-D and you might be talking about an expensive model to run, or someone else's runs you can't repeat. If rerunning is out of the question you will need to try to add an index offline...
Offline indexing (easy if grids are same, more complex otherwise)
If your spatial dimensions are the same across all files, then this is still an easy task, as you can add an index offline very easily across all the time steps in each file using nco:
ncap2 -s 'myrun[$time]=array(X,0,$time)' infile.nc outfile.nc
or if you are happy to overwrite the original file (be careful!)
ncap2 -O -s 'myrun[$time]=array(X,0,$time)'
where X is the run number. This adds a new variable myrun, which is a function of time, and writes X at each time step. When you merge the files you can see which data slice came from which specific run.
By the way, the second argument (the zero) is the increment; as this is set to zero, the number X is written for all timesteps in a given file (if it were 1 instead, the index would increase by one each timestep, which could be useful in some cases. For example, you might use two indices, one with an increment of zero to identify the run, and a second with an increment of one to easily tell you which step of the Xth run a data slice belongs to).
If your files are for different domains too, then you might want to put them on a common grid before you do that... I think for that
cdo enlarge
might be of help, see this post : https://code.mpimet.mpg.de/boards/2/topics/1459
I agree that an index will be simpler than a filename. I would just add to the above answer that the command to add a unique index X with a time dimension to each input file can be simplified to
ncap2 -s 'myrun[$time]=X' in.nc out.nc
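Since the question involves thousands of files, the command has to be run once per file with a different X. A sketch of driving that from R, using each file's position in the list as its run index (the file pattern and output naming here are assumptions):

# Loop over all simulation outputs and stamp each one with its own run index
files <- list.files(pattern = "\\.nc$")
for (i in seq_along(files)) {
  outfile <- sub("\\.nc$", "_indexed.nc", files[i])
  system2("ncap2",
          args = c("-s", shQuote(sprintf("myrun[$time]=%d", i)),
                   files[i], outfile))
}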
I am writing an R script (using RStudio on Ubuntu 18.04), and at a certain point I need to randomly pick (with no replacement) 123 random numbers between 1 and 1040. I do this with:
myvector[sample(1:1040,X,replace=F)] = 1
where 'myvector' is a 1040-long vector of 0s, and I need to replace 0 with 1 in X=123 random positions.
Every time I copy this exact line of code to the R console and run it, it works as I would expect, picking 123 different numbers at every iteration.
Strangely however, every time I execute the script containing this line of code, it instead picks the same 123 numbers.
At first, I thought this may be due to some trivial object saving/renaming bug, but even if I add
print(sample(1:1040,123,replace=F))
to the script, it always picks the same set of numbers (different from the set picked by the code line above, but identical at every iteration).
I haven't invoked set.seed() anywhere within the script, nor at any point since I switched the computer on, so I don't understand why it's behaving this way.
Any idea?
Thank you very much
If you have loaded a previous workspace, the seed from that workspace is also loaded. This will give the same results every time you invoke sample.
Include this line
rm(.Random.seed, envir=globalenv())
before you call sample
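Putting that together, a minimal sketch of the fix (assuming the stale seed really was restored from a saved workspace, e.g. an .RData file loaded at startup):

# Discard any .Random.seed restored with the workspace so sampling starts fresh
if (exists(".Random.seed", envir = globalenv())) {
  rm(.Random.seed, envir = globalenv())
}

myvector <- rep(0, 1040)
myvector[sample(1:1040, 123, replace = FALSE)] <- 1  # now differs between runs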
I have 2 moderate-size datasets that I am using in R. I want to check whether each reference number in one dataset matches a reference number in the other dataset and, if so, fill a column in the second dataset with the value from the corresponding column of the other dataset.
ghi2$state=ifelse(b1$accntnumber %in% ghi2$referencenumber,b1$address,0)
Every time I run this code, RStudio hangs and is unresponsive for a long time. Is it just taking that long to process the command, or is my command wrong?
I am using a system with 2GB of RAM, so I think that is why R hangs. Should I use the == operator instead of %in%? Would I get the same result?
1. Should I use the == operator instead of %in%?
No (!). See #2.
2. Would I get the same result?
No. The order and position have to match with ==. Also, see @akrun's comment.
3. How to make it faster and/or deal with RStudio freezing
If RStudio freezes, you can save your log file info and send it to the RStudio team, who respond quickly; you could also bring your log files here for help.
Beyond that, general Big Data rules apply. Here are some tips:
Try data.table (see the data.table sketch after this list)
Try it on the command line instead of RStudio
Watch your Resource Monitor (or whatever you use to monitor resources) and observe the memory and CPU usage
If it's a RAM issue you can
a. use a cloud account to get more RAM
b. buy some more RAM (just sayin')
c. use 64-bit R and increase the RAM available to R to its max if it's not already
If it's a CPU issue you can consider parallelization
If any of these IDs are repeated (and that makes sense in the context of your specific use case), you can use unique to avoid redundant comparisons
There are lots of other tips you can find in pre-existing Big Data Q&A's on SO as well.
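As a concrete illustration of the data.table suggestion, note that the matching itself is a join/lookup rather than an element-wise ifelse. A hedged sketch reusing the column names from the question (exactly which column should map to which is an assumption):

# Base R: look up each reference number once with match(); unmatched rows get NA
ghi2$state <- b1$address[match(ghi2$referencenumber, b1$accntnumber)]

# data.table: memory-frugal update-join doing the same lookup in place
library(data.table)
setDT(ghi2); setDT(b1)
ghi2[b1, state := i.address, on = .(referencenumber = accntnumber)]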
I have an Rscript being called from a Java program. The purpose of the script is to automatically generate a bunch of graphs in ggplot and then splat them onto a pdf. It has grown somewhat large, with maybe 30 graphs, each of which is called from its own script.
The input is a tab-delimited file of 5-20MB, but the R session sometimes goes up to 12GB of RAM usage (on a Mac running OS X 10.6.8, by the way, but this will be run on all platforms).
I have read about how to look at the memory size of objects, and nothing is ever over 25MB; even if R deep-copied everything for every function and every filter step, it shouldn't get close to this level.
I have also tried gc() to no avail. If I do gcinfo(TRUE) and then gc(), it tells me R is using something like 38MB of RAM. But Activity Monitor goes up to 12GB and things slow down, presumably due to paging on the hard drive.
I tried calling it via a bash script in which I did ulimit -v 800000 but no good.
What else can I do?
In the process of making assignments R will always make temporary copies, sometimes more than one or even two. Each temporary assignment will require contiguous memory for the full size of the allocated object. So the usual advice is to plan to have at least three times the amount of contiguous memory available. This means you also need to be concerned about how many other non-R programs are competing for system resources, as well as being aware of how your memory is being used by R. You should try restarting your computer, running only R, and seeing if you get success.
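One way to see those hidden copies (a small illustration, not from the original answer) is base R's tracemem(), which reports whenever an object is duplicated; the CRAN binaries are built with the memory profiling this needs:

x <- rnorm(1e6)
tracemem(x)    # start reporting copies of x

y <- x         # no copy yet: R is copy-on-modify
y[1] <- 0      # tracemem prints a message here, because the vector is duplicated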
An input file of 20MB might expand quite a bit (8 bytes per double, and perhaps more per character element in your vectors) depending on the structure of the file. The pdf file object will also take quite a bit of space if you are plotting every point in a large file.
My experience is not the same as others who have commented. I do issue gc() before doing memory intensive operations. You should offer code and describe what you mean by "no good". Are you getting errors or observing the use of virtual memory ... or what?
I apologize for not posting a more comprehensive description with code. It was fairly long as was the input. But the responses I got here were still quite helpful. Here is how I mostly fixed my problem.
I had a variable number of columns which, with some outliers, got very numerous. But I didn't need the extreme outliers, so I just excluded them and cut off those extra columns. This alone decreased the memory usage greatly. I hadn't looked at the virtual memory usage before, but sometimes it was as high as 200GB, lol. This brought it down to at most 2GB.
Each graph was created in its own function. So I rearranged the code such that every graph was generated, printed to the pdf, and then removed with rm(graphname).
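Roughly, the pattern looks like this (the plotting functions and data are placeholders standing in for the poster's roughly 30 graph scripts):

library(ggplot2)

# Placeholder plotting functions and data, just to make the pattern concrete
make_plot1 <- function(d) ggplot(d, aes(x, y)) + geom_point()
make_plot2 <- function(d) ggplot(d, aes(x)) + geom_histogram()
data <- data.frame(x = rnorm(100), y = rnorm(100))

pdf("report.pdf")
for (make_plot in list(make_plot1, make_plot2)) {
  g <- make_plot(data)   # build one ggplot object
  print(g)               # render it onto the open pdf device
  rm(g); gc()            # drop it and collect before building the next one
}
dev.off()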
Further, I had many loops in which I was creating new columns in data frames. Instead of doing this, I just created vectors not attached to data frames for these calculations. This actually had the benefit of greatly simplifying some of the code.
After I stopped adding columns to the existing data frames and instead used standalone column vectors, memory usage came down to 400MB. While this is still more than I would expect it to use, it is well within my restrictions. My users are all in my company, so I have some control over which computers it gets run on.