I am using mapply(function, args) on a big dataset. After every 100 iterations I need to add a 1-second delay. So the question is: is it possible to show an iteration count or progress bar from within mapply(function, args)?
Thanks
No, but if you switch to using the corresponding functions from plyr you can add a progress bar to the function call.
Without a minimal, reproducible example I'm not going to the effort of finding the exact plyr equivalent, but it will be one of the m*ply functions:
> ls(pos=2,pattern="m.*ply")
[1] "maply" "mdply" "mlply" "m_ply"
If you know the total number of iterations in advance, you could just add another argument to mapply as an iteration counter. In this example I added z; it makes the call sleep for 1 second every 3 iterations:
mapply(function(x, y, z) {
  if (z %% 3 == 0) {
    Sys.sleep(1)
    cat(paste0("Iteration ", z, " ...sleeping\n"))
  }
  x * y
}, x = 1:10, y = 1:10, z = 1:10)
# Iteration 3 ...sleeping
# Iteration 6 ...sleeping
# Iteration 9 ...sleeping
# [1] 1 4 9 16 25 36 49 64 81 100
If you need more convincing, wrap the statement in system.time(); I get a runtime of 3.002 seconds.
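That is, something along these lines (the user/system values below are illustrative and will vary; elapsed is dominated by the three 1-second sleeps):
system.time(
  mapply(function(x, y, z) { if (z %% 3 == 0) Sys.sleep(1); x * y },
         x = 1:10, y = 1:10, z = 1:10)
)
#    user  system elapsed
#   0.002   0.000   3.002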
This is an easy question, but I had problems solving it, so please don't laugh at me.
I've been given a task to re-create my own function for the mean in R instead of using the built-in mean function.
The condition for my function is that I need to use map_dbl to handle any iteration in my function.
I know that mean = (sum of all elements)/(number of elements)
The question is, does anyone know how to calculate the sum of all elements using map_dbl?
A bit overkill:
library(purrr)

x <- 1:10
counter <- 0
mapsum <- map_dbl(x, ~{counter <<- counter + .x})
mapsum
[1] 1 3 6 10 15 21 28 36 45 55
tail(mapsum,1)
55
As mentioned in the comments, this works, but sum/mean is a reduce operation, not a map operation.
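If the map_dbl constraint can be relaxed, a minimal sketch of the reduce formulation with purrr:
library(purrr)

# Fold `+` over the vector, then divide by the element count.
my_mean <- function(x) reduce(x, `+`) / length(x)
my_mean(1:10)
# [1] 5.5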
Say I have a series of numbers:
seq1<-c(1:20,25:40,48:60)
How can I return a vector that lists the points at which the sequence was broken, like so:
c(21,24)
[1] 21 24
c(41,47)
[1] 41 47
Thanks for any help.
To show my miserably failing attempt:
nums <- min(seq1):max(seq1) %in% seq1
which(nums == FALSE)[1]
res.vec <- vector()
counter <- 0
res.vec2 <- vector()
counter2 <- 0
for (i in 2:length(seq1)) {
  if (nums[i] == FALSE & nums[i-1] != FALSE) {
    counter <- counter + 1
    res.vec[counter] <- seq1[i]
  }
  if (nums[i] == TRUE & nums[i-1] != TRUE) {
    counter2 <- counter2 + 1
    res.vec2[counter2] <- seq1[i]
  }
}
cbind(res.vec, res.vec2)
I have changed the general function a bit, so I think this should be a separate answer.
You could try
seq1 <- c(1:20, 25:40, 48:60)
myfun <- function(data, threshold) {
  cut <- which(c(1, diff(data)) > threshold)
  return(cut)
}
You get the points you have to care about using
myfun(seq1,1)
[1] 21 37
For further use, it is convenient to store the result in an object.
pru<-myfun(seq1,1)
So you can now call
df<-data.frame(pos=pru,value=seq1[pru])
df
pos value
1 21 25
2 37 48
You get a data frame with the position and the value of the breaks for your desired threshold. If you want a list instead of a data frame, it works like this:
list(pos=pru,value=seq1[pru])
$pos
[1] 21 37
$value
[1] 25 48
The function diff will give you the differences between successive values:
> x <- c(1,2,3,5,6,3)
> diff(x)
[1] 1 1 2 1 -3
Now look for those values that are not equal to one for "breakpoints" in your sequence.
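For the missing ranges the question asks for (c(21,24) and c(41,47)), a short sketch building on diff:
seq1 <- c(1:20, 25:40, 48:60)
brk <- which(diff(seq1) != 1)  # last index before each gap
# first and last missing value around each break
lapply(brk, function(i) c(seq1[i] + 1, seq1[i + 1] - 1))
# [[1]]
# [1] 21 24
#
# [[2]]
# [1] 41 47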
Taking into account the comments made here, for general-purpose use you could use the following:
fun <- function(data, threshold) {
  t <- which(c(1, diff(data)) != threshold)
  return(t)
}
Note that data could be any numerical vector (such as a data frame column). I would also consider using grep with a similar approach, but it all depends on user preference.
I have a matrix with 2 columns as described below:
TIME PRICE
10 45
11 89
13 89
15 12
16 09
17 34
19 89
20 90
23 21
26 09
In the above matrix, I need to iterate through the TIME column, adding 5 seconds at each step and accessing the PRICE in the matching row.
For example, I start with 10, so I need to access 15 (10+5). This would have been easy if the TIME column were continuous, but it is not. At time 15, I need to get hold of the corresponding price. The next element to access is 20 (15+5) with its corresponding price; I again add 5 seconds, and so on until the end of the entire data set. In case an element is not present, the one immediately greater than it must be accessed to obtain the corresponding price.
If the rows you want to extract are m[1,1]+5, m[1,1]+10, m[1,1]+15, etc., then:
m <- cbind(TIME=c(10,11,13,15,16,17,19,20,23,26),
PRICE=c(45,89,89,12,9,34,89,90,21,9))
r <- range(m[,1]) # 10,26
r <- seq(r[1]+5, r[2], 5) # 15,20,25
r <- findInterval(r-1, m[,1])+1 # 4,8,10 (values 15,20,26)
m[r,2] # 12,90,9
findInterval finds the index of the last value less than or equal to the given value, so I pass it a slightly smaller value and then add 1 to the index.
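A quick illustration of that off-by-one trick, with hypothetical times:
v <- c(10, 11, 13, 15, 16)
findInterval(15, v)          # 4: index of the last value <= 15
findInterval(15 - 1, v) + 1  # 4: lands on 15 itself when it is present
w <- c(10, 11, 13, 16)       # now 15 is missing
findInterval(15 - 1, w) + 1  # 4: lands on 16, the next greater value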
Breaking the question apart into sub-pieces...
Getting the row with value 15:
Call your Matrix, say, DATA, and
[1] extract the row of interest:
DATA[DATA[,1] == 15, ]
Then snag the second column.
[2] Adding 5 to the first column (I'm pretty sure you can just do this):
DATA[,1] = DATA[,1] + 5
This should get you started. The rest seems to just be some funky iteration, incrementing by 5, using [1] to get the price you want each time, swapping 15 for some variable.
I leave the rest of the solution as an exercise to the reader. For tips on looping in R, and more, see the tutorial below (I don't expect it to be taken down any time soon, but you may want to keep a local copy). Good luck :)
http://www.stat.berkeley.edu/users/vigre/undergrad/reports/VIGRERintro.pdf
As #Tommy commented above, it is not clear which TIME values you exactly want to get. To me, it seems like you want the PRICE for the sequence 10, 15, 20, 25, ... If so, you can do that easily using the mod (%%) function:
TIME <- c(10,11,13,15,16,17,19,20,23,26) # Your times
PRICE <- c(45,89,89,12,9,34,89,90,21,9) # your prices
PRICE[TIME %% 5 == 0] # Get prices from times in sequence 10, 15, 20, ...
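For the data above, this returns:
[1] 45 12 90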
I have a data set with three grouping variables: condition, sub, & delay. Here is a simplified version of my data (the real data is much longer):
sub condition delay later_value choiceRT later_choice primeRT cue
10 SIZE 10 27 1832 1 888 CHILD
10 PAST 5 11 298 0 1635 PANTS
10 SIZE 21 13 456 0 949 CANDY
11 SIZE 120 22 526 1 7963 BOY
11 FUTURE 120 27 561 1 4389 CHILDREN
11 PAST 5 13 561 1 2586 SPRING
I have a complicated set of procedures to apply to these data (details are not important)
I wrote the following function that accomplishes what I want when the data are split by the three grouping variables. It returns the 3 values I am interested in (indiff, p_intercept, and p_lv):
getIndiffs <- function(currdelay){
if (mean(currdelay$later_choice) == 1) {
indiff = 10.5
p_intercept = "laters"
p_lv = "laters"
}
else if (mean(currdelay$later_choice) == 0) {
indiff = 30.5
# no p-val here, code that this was not calculated
p_intercept = "nows"
p_lv = "nows"
}
else {
F <- factor(currdelay$later_choice)
fit <- glm(F~later_value,data=currdelay,family=binomial())
indiff <- -coef(fit)[1]/coef(fit)[2]
if (indiff < 10) indiff = 10.5
else if (indiff > 30) indiff = 30.5
p_intercept = round(summary(fit)$coef[, "Pr(>|z|)"][1],3)
p_lv = round(summary(fit)$coef[, "Pr(>|z|)"][2], 3)
c(indiff,p_intercept,p_lv)
}
I am trying to use ddply to apply it to each subset of the data per the 3 grouping variables:
ddply(data,.(sub,condition,delay),getIndiffs)
However, when I run this I get the error
Error in list_to_dataframe(res, attr(.data, "split_labels")) :
Results do not have equal lengths
Strangely, this works fine when I use only 1 grouping variable but throws the error with 2 or more.
Also, when I "simulate" splitting the dataset myself into a data frame containing only one subset defined by the 3 grouping variables, my function works just fine. (Note: I've tried different ways of returning 3 variables, and even returning just 1 variable, and it does not work either.)
Basically, what I want to know is how to use plyr to use a function to return multiple variables.
Any other solutions to my problem that are fundamentally different are also welcome.
That error usually happens to me when the function, applied to one of the pieces, returns an empty data frame. In any case, an easy way to debug the situation is to use dlply instead of ddply and examine the output; for instance:
x <- dlply(data,.(sub,condition,delay),getIndiffs)
sapply(x,ncol)
to check that they all have the same number of columns. If not, standardize your function more.
It looks like your function getIndiffs is designed to run on a single row, not on a whole data frame. d*ply(x, vars, fn) hands fn() an entire data frame consisting of the subset of observations matching that group. Also, the function can return from three different places, at the end of each conditional clause; I think you meant to put c(indiff, p_intercept, p_lv) after the last } (and end your function with another }).
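To make that concrete, here is a sketch (untested against the real data) of a version that returns a one-row data frame from every branch, so the pieces ddply collects always have the same shape; the p-values are stored as character so the column type matches the "laters"/"nows" codes:
getIndiffs <- function(currdelay) {
  m <- mean(currdelay$later_choice)
  if (m == 1) {
    data.frame(indiff = 10.5, p_intercept = "laters", p_lv = "laters")
  } else if (m == 0) {
    # no p-value here; code that this was not calculated
    data.frame(indiff = 30.5, p_intercept = "nows", p_lv = "nows")
  } else {
    F <- factor(currdelay$later_choice)
    fit <- glm(F ~ later_value, data = currdelay, family = binomial())
    indiff <- unname(-coef(fit)[1] / coef(fit)[2])
    if (indiff < 10) indiff <- 10.5 else if (indiff > 30) indiff <- 30.5
    ps <- round(summary(fit)$coef[, "Pr(>|z|)"], 3)
    data.frame(indiff = indiff,
               p_intercept = as.character(ps[1]),
               p_lv = as.character(ps[2]))
  }
}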
I have a data.frame of cells, values and coordinates. It resides in the global environment.
> head(cont.values)
cell value x y
1 11117 NA -34 322
2 11118 NA -30 322
3 11119 NA -26 322
4 11120 NA -22 322
5 11121 NA -18 322
6 11122 NA -14 322
Because my custom function takes almost a second to calculate an individual cell (and I have tens of thousands of cells to calculate), I don't want to duplicate calculations for cells that already have a value. My solution below tries to avoid that. Each cell can be calculated independently, screaming for parallel execution.
What my function actually does is check whether there's a value for a specified cell number; if it's NA, it calculates the value and inserts it in place of the NA.
I can run my magic function (the result is the value for a corresponding cell) using the apply family of functions, and from within apply I can read and write cont.values without a problem (it's in the global environment).
Now I want to run this in parallel (using snowfall), and I'm unable to read from or write to this variable from an individual core.
Question: What solution would be able to read from/write to a dynamic variable residing in the global environment from within a worker (core) when executing a function in parallel? Is there a better approach to doing this?
The pattern of a central store that workers consult for values is implemented in the rredis package on CRAN. The idea is that the Redis server maintains a store of key-value pairs (your global data frame, re-implemented). Workers query the server to see if the value has been calculated (redisGet) and if not do the calculation and store it (redisSet) so that other workers can re-use it. Workers can be R scripts, so it's easy to expand the work force. It's a very nice alternative parallel paradigm.

Here's an example that uses the notion of 'memoizing' each result. We have a function that is slow (it sleeps for a second):
fun <- function(x) { Sys.sleep(1); x }
We write a 'memoizer' that returns a variant of fun that first checks to see if the value for x has already been calculated, and if so uses that
memoize <- function(FUN) {
  force(FUN)  # circumvent lazy evaluation
  require(rredis)
  redisConnect()
  function(x) {
    key <- as.character(x)
    val <- redisGet(key)
    if (is.null(val)) {
      val <- FUN(x)
      redisSet(key, val)
    }
    val
  }
}
We then memoize our function
funmem <- memoize(fun)
and go
> system.time(res <- funmem(10)); res
user system elapsed
0.003 0.000 1.082
[1] 10
> system.time(res <- funmem(10)); res
user system elapsed
0.001 0.001 0.040
[1] 10
This does require a Redis server running outside R, but the server is very easy to install; see the documentation that comes with the rredis package.
A within-R parallel version might be
library(snow)
cl <- makeCluster(c("localhost","localhost"), type = "SOCK")
clusterEvalQ(cl, { require(rredis); redisConnect() })
tasks <- sample(1:5, 100, TRUE)
system.time(res <- parSapply(cl, tasks, funmem))
It will depend on what the function in question is, of course, but I'm afraid snowfall won't be much help there. The thing is, you'd have to export the whole data frame to the different cores (see ?sfExport) and still find a way to combine it. That kind of defeats the whole purpose of changing the value in the global environment, as you probably want to keep memory use as low as possible.
You can dive into the low-level functions of snow to, kind of, get this to work. See the following example:
#Some data
Data <- data.frame(
cell = 1:10,
value = sample(c(100,NA),10,TRUE),
x = 1:10,
y = 1:10
)
# A sample function
sample.func <- function() {
  id <- which(is.na(Data$value))  # get the NA values
  # this splits up the values from the data frame in a list
  # which will be passed to clusterApply later on.
  parts <- lapply(clusterSplit(cl, id), function(i) Data[i, c("x", "y")])
  # Here happens the magic
  Data$value[id] <<- unlist(clusterApply(cl, parts, function(x) {
    x$x + x$y
  }))
}
#now we run it
require(snow)
cl <- makeCluster(c("localhost","localhost"), type = "SOCK")
sample.func()
stopCluster(cl)
> Data
cell value x y
1 1 100 1 1
2 2 100 2 2
3 3 6 3 3
4 4 8 4 4
5 5 10 5 5
6 6 12 6 6
7 7 100 7 7
8 8 100 8 8
9 9 18 9 9
10 10 20 10 10
You will still have to copy (part of) your data to get it to the cores, though. But that happens anyway when you call snowfall's high-level functions on data frames, as snowfall uses the low-level functions of snow internally.
Plus, one shouldn't forget that if you change one value in a data frame, the whole data frame is copied in memory as well. So you won't win that much by adding the values one by one as they come back from the cluster. You might want to try some different approaches and do some memory profiling as well.
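For the memory-profiling part, base R's tracemem() is a quick way to see those copies; a minimal sketch (tracemem requires R built with memory profiling, which most standard builds are):
# Watch for copies of the data frame while filling in values one by one.
Data <- data.frame(value = rep(NA_real_, 10))
tracemem(Data)      # report whenever Data is duplicated from now on
Data$value[3] <- 5  # this single assignment copies the whole data frame
untracemem(Data)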
I agree with Joris that you will need to copy your data to the other cores.
On the positive side, you don't have to worry about whether NAs are in the data within the cores.
If your original data.frame is called cont.values:
nnaidx <- is.na(cont.values$value)  # where the data is missing originally
dfrnna <- cont.values[nnaidx, ]     # subset for copying to the other cores
calcValForDfrRow <- function(dfrRow) {return(dfrRow$x + dfrRow$y)}  # or whatever pleases you
sfExport("dfrnna", "calcValForDfrRow")  # export what is needed to the other cores
# sfSapply handles the 'reordering', so this works exactly as if you had called sapply
cont.values$value[nnaidx] <- sfSapply(seq(dim(dfrnna)[1]),
                                      function(i) calcValForDfrRow(dfrnna[i, ]))
This should work nicely (barring typos).