Order of preprocessing steps in the mlr package in R

Working with the preprocessing wrappers already implemented in mlr as well as my own wrappers, I am wondering in which order the preprocessing steps are applied in the following example:
classif.lrn.net = makePreprocWrapperCaret(classif.lrn.net, ppc.nzv=TRUE, ppc.corr=TRUE, ppc.conditionalX=TRUE, ppc.center=TRUE, ppc.scale=TRUE, ppc.spatialSign=TRUE)
classif.lrn.net = makeSMOTEWrapper(classif.lrn.net)
classif.lrn.net = makeImputeWrapper(learner=classif.lrn.net, classes = list(numeric = imputeMedian(), integer =imputeMedian()))
From the mlr tutorial I know that within the caretPreprocWrapper, operations are applied in the following order:
near-zero variance filter, correlation filter, imputation, spatial sign.
Moreover, the SMOTE wrapper will be applied before these steps (because it comes after the caretWrapper in the code).
But when will the imputation wrapper be applied?
I think it is important that the imputation happens before the spatial sign transformation (this order is also implemented in the caretPreprocWrapper). Since I am using my own imputation wrapper, I am not sure if and how I can ensure that the imputation is done in between the different caret preprocessing steps.

The way you have specified it, the imputation happens at the beginning of the entire process, and only once (i.e. not in between different caret steps). The outermost wrapper is run first and doesn't know anything about the inner workings of the learner it wraps.
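If you want the imputation to happen between the caret steps rather than before everything else, one option (a sketch, assuming your mlr version exposes caret's imputation methods via arguments such as ppc.medianImpute) is to drop the separate impute wrapper and let the caret wrapper impute internally, since caret applies its imputation after the filters and before the spatial sign transformation:
classif.lrn.net = makePreprocWrapperCaret(classif.lrn.net, ppc.nzv = TRUE, ppc.corr = TRUE, ppc.conditionalX = TRUE, ppc.medianImpute = TRUE, ppc.center = TRUE, ppc.scale = TRUE, ppc.spatialSign = TRUE)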


What are the rules for ppp objects? Is selecting two variables possible for an sapply function?

I am working with code that describes a Poisson cluster process in spatstat, breaking it down one line at a time to understand it. It starts easily enough:
library(spatstat)
lambda<-100
win<-owin(c(0,1),c(0,1))
n.seeds<-lambda*win$xrange[2]*win$yrange[2]
Once the window is defined, I then generate my points using a random number generator:
x=runif(min=win$xrange[1],max=win$xrange[2],n=pmax(1,n.seeds))
y=runif(min=win$yrange[1],max=win$yrange[2],n=pmax(1,n.seeds))
I know this can be plotted straight away using the ppp function:
seeds<-ppp(x=x,
y=y,
window=win)
plot(seeds)
In the next line I add marks to the ppp object; they apparently describe the angle of rotation of the points. I don't understand how this works right now, but that is okay, I will figure it out later.
marks<-data.frame(angles=runif(n=pmax(1,n.seeds),min=0,max=2*pi))
seeds1<-ppp(x=x,
y=y,
window=win,
marks=marks)
The first problem I encounter is that an object called pops, describing the populations of the window, is added to the ppp object. I understand how the values are derived: they come from a Poisson distribution with a given mean mu (which can be any value), with the number of draws equal to the number of points in the window.
seeds2<-ppp(x=x,
y=y,
window=win,
marks=marks,
pops=rpois(lambda=5,n=pmax(1,n.seeds)))
My first question is, how is it possible to add a variable that has no classification in the ppp object? I checked the ppp documentation and there is no mention of pops.
The second question I have is about selecting two variables with the $ operator; the next line uses an sapply function to define dimensions.
dim1<-pmax(1,sapply(seeds1$marks$pops, FUN=function(x)rpois(n=1,sqrt(x))))
I have never seen the $ operator used twice in a row, and seeds2$marks$pop returns the error "$ operator is invalid for atomic vectors". Could you explain what is going on here?
Many thanks.
That's several questions - please ask one question at a time.
From your post it is not clear whether you are trying to understand someone else's code, or developing code yourself. This makes a difference to the answer.
Just to clarify, this code does not come from inside the spatstat package; it is someone's code using the spatstat package to generate data. There is code in the spatstat package to generate simulated realisations of a Poisson cluster process (which is I think what you want to do), and you could look at the spatstat code for rPoissonCluster to see how it can be done correctly and efficiently.
The code you have shown here has numerous errors. But I will start by answering the two questions in your title.
The rules for creating ppp objects are set out in the help file for ppp. The help says that if the argument window is given, then unmatched arguments ... are ignored. This means that in the line seeds2<-ppp(x=x,y=y,window=win,marks=marks,pops=rpois(lambda=5,n=pmax(1,n.seeds)))
the argument pops will be ignored.
The idiom sapply(seeds1$marks$pops, FUN=f) is perfectly valid syntax in R. If the object seeds1 is a structure or list which has a component named marks, which in turn is a structure or list which has a component named pops, then the idiom seeds1$marks$pops would extract it. This has nothing particularly to do with sapply.
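For instance (a small illustration of mine, not taken from the original code), chained $ extraction on a nested structure works like this:
obj <- list(marks = data.frame(pops = c(2, 5, 7)))
obj$marks$pops                                           # extracts the pops column: 2 5 7
sapply(obj$marks$pops, function(x) rpois(n = 1, lambda = sqrt(x)))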
Now, turning to the errors in the code:
The line n.seeds<-lambda*win$xrange[2]*win$yrange[2] is presumably meant to calculate the expected number of cluster parents (cluster seeds) in the window. This would only work if the window is a rectangle with bottom left corner at the origin (0,0). It would be safer to write n.seeds <- lambda * area(win).
However, the variable n.seeds is used later as if it were the number of cluster parents (cluster seeds). The author has forgotten that the number of seeds is random with a Poisson distribution. So the more correct calculation would be n.seeds <- rpois(1, lambda * area(win))
However this is still not correct, because cluster parents (seed points) outside the window can also generate offspring points inside the window. So seed points must actually be generated in a larger window obtained by expanding win. The appropriate command used inside spatstat to generate the cluster parents is bigwin <- grow.rectangle(Frame(win), cluster_diameter) ; Parents <- rpoispp(lambda, win = bigwin)
The author apparently wants to assign two mark values to each parent point: a random angle and a random number pops. The correct way to do this is to make the marks a data frame with two columns, for example marks(seeds1) <- data.frame(angles=runif(n.seeds, max=2*pi), pops=rpois(n.seeds, 5))
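Putting these corrections together, a minimal sketch might look like the following (cluster_diameter is a placeholder for the maximum offspring distance and must be set for your process):
library(spatstat)
lambda <- 100
win <- owin(c(0, 1), c(0, 1))
cluster_diameter <- 0.1                              # placeholder value
bigwin <- grow.rectangle(Frame(win), cluster_diameter)
parents <- rpoispp(lambda, win = bigwin)             # random number of parents in the expanded window
n.seeds <- parents$n
marks(parents) <- data.frame(angles = runif(n.seeds, max = 2 * pi),
                             pops = rpois(n.seeds, 5))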

Exclude specific tensors being updated by optimizer in TensorFlow

I have two graphs that I want to train independently, which means I have two different optimizers, but at the same time one of them uses tensor values from the other graph. As a result, I need to be able to stop specific tensors from being updated while training one of the graphs. I have assigned two different name scopes to my tensors and use this code to control which variables each optimizer updates:
mentor_training_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "mentor")
train_op_mentor = mnist.training(loss_mentor, FLAGS.learning_rate, mentor_training_vars)
mentee_training_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "mentee")
train_op_mentee = mnist.training(loss_mentee, FLAGS.learning_rate, mentee_training_vars)
The var_list argument is used as shown below, in the training method of the mnist module:
def training(loss, learning_rate, var_list):
    # Add a scalar summary for the snapshot loss.
    tf.summary.scalar('loss', loss)
    # Create the gradient descent optimizer with the given learning rate.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    # Create a variable to track the global step.
    global_step = tf.Variable(0, name='global_step', trainable=False)
    # Use the optimizer to apply the gradients that minimize the loss
    # (and also increment the global step counter) as a single training step.
    train_op = optimizer.minimize(loss, global_step=global_step, var_list=var_list)
    return train_op
I'm using the var_list argument of the optimizer's minimize method to control which variables the optimizer updates.
Right now I'm not sure whether I have done this correctly, and whether there is any way to check that an optimizer only updates part of a graph.
I would appreciate if anyone can help me with this issue.
Thanks!
I have had a similar problem and used the same approach as you, i.e. via the var_list argument of the optimizer. I then checked whether the variables not intended for training stayed the same using:
the_var_np = sess.run(tf.get_default_graph().get_tensor_by_name('the_var:0'))
assert np.equal(the_var_np, pretrained_weights['the_var']).all()
pretrained_weights is a dictionary returned by np.load('some_file.npz') which I used to store the pre-trained weights to disk.
Just in case you need that as well, here is how you can override a tensor with a given value:
value = pretrained_weights['the_var']
variable = tf.get_default_graph().get_tensor_by_name('the_var:0')
sess.run(tf.assign(variable, value))

Taylor diagram from existing Correlation and Standard Dev values

Is it possible to create a Taylor diagram from already calculated correlation and standard deviation values?
I am doing model evaluation, and I already have the correlation and standard deviation values. I understand that there is already a package, plotrix, where by supplying the observed and modeled values the diagram is created. However, for the type of work that I am doing, it is easier to start directly from the correlation and standard deviation values.
Is there any way I can do this in R?
There's no reason it shouldn't be possible, but the authors didn't seem to allow for that when they wrote the function. The function is a bit long and complex, but the part that does the calculation is at the top. It is possible to swap out that code and replace it to allow for the passing of summary statistics. Now, keep in mind that what I'm about to do is a hack, and I've only tested it with version 3.5-5 of plotrix. Other versions may not work.
Here we will create a new function taylor.diagram2 that takes all the code from taylor.diagram but adds an extra if statement to check for a list of summarized data as the first argument:
taylor.diagram2<-taylor.diagram
bl<-as.list(body(taylor.diagram))
cond<-list(
as.name("if"),
quote(is.list(ref) & missing(model)), #condition
quote({R<-ref$R; sd.r<-ref$sd.r; sd.f<-ref$sd.f}), #if true
as.call(c(as.symbol("{"), bl[3:8]))) #else
bl<-c(bl[1:2], as.call(cond), bl[9:length(bl)]) #splice in new code
body(taylor.diagram2)<-as.call(bl) #update function
Now we can test the function. First, we'll do things the standard way
#test data
aref<-rnorm(30,sd=2)
amodel1<-aref+rnorm(30)/2
#standard behavior function
taylor.diagram2(aref, amodel1, main="Standard Behavior")
#summarized data
xx<-list(
R=cor(aref, amodel1, use = "pairwise"),
sd.r=sd(aref),
sd.f=sd(amodel1)
)
#modified behavior
taylor.diagram2(xx, main="Modified Behavior")
So the new taylor.diagram2 function can do both. If you pass it two vectors, it will do the standard behavior. If you pass it a list with the names R, sd.r, and sd.f, then it will do the same plot but with the values you passed in. Also, the model parameter must be empty for the modified version to work. That means if you want to set any additional parameter, you must use named parameters rather than positional arguments.
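For example (assuming a second model vector amodel2, and that your plotrix version supports taylor.diagram's add and col arguments), a point computed from summary statistics can be overlaid on the existing plot:
amodel2 <- aref + rnorm(30)
xx2 <- list(
R = cor(aref, amodel2, use = "pairwise"),
sd.r = sd(aref),
sd.f = sd(amodel2)
)
taylor.diagram2(xx2, add = TRUE, col = "blue")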

Avoiding using global objects when building an R package with multiple separate functions

I have built an R package that runs a complex Bayesian model (Dirichlet Process Mixture model on spatial data) including an MCMC, thinning and validation and interface with Googlemaps. I'm very happy with performance and it runs without problems. The only issue is I would like to get it up on CRAN and it will be rejected because I extensively use global variables.
The package is built around the use of 8 core functions (which the user interacts with):
1) LoadData: Loads in data, extracts key information and sets up a series of global matrices as well as other small list objects.
2) ModelParameters: Sets model parameters, option to plot prior on parameter sigma on Googlemap. Calculates a hyper-prior at this point and saves a large matrix to the global environment
3) GraphicParameters: Sets graphic parameters of maps and plots (see code below)
4) CreateMaps: Creates the prior surface on source location tau and plots the data on a Google map. Keeps a number of global objects saved for repeated plotting of this map.
5) RunMCMC: Runs the bulk of the analysis using MCMC (a time intensive step), creates many global objects.
6) ThinandAnalsye: Thins the posterior samples and constructs the geoprofile (a time intensive step)
7) PlotGP: Plots the data and overlays the geoprofile onto a Google map
8) reporthitscores: OPTIONAL if source data is imported, calculates the hit scores of potential sources
Each one is run in turn before the next, and I pass global variables out which are used by one or more of the other functions.
I built it this way for a reason, as the user must stop and evaluate the results of these functions before rushing ahead to the future ones.
Each of these functions passes not just fixed parameters, but also large map objects, lists and matrices as global objects. I thought it was a nice simple solution with a smooth workflow (you can check the results in your main working environment before moving on, possibly applying transformations etc) and I have given all the objects unique and informative names.
How do I get around this, and pass the checks of CRAN whilst keeping my user friendly workflow of a series of interacting functions?
I don't want to post a lot of code (the MCMC part alone is several hundred lines long), but I will include one of the simple examples. GraphicParameters is one of my simple parameter-setting functions, which comes with default values set. This is a simple example; there are much more complex ones in the package. There is a model parameters function that pulls many of its variables from an existing data loading function, for example.
GraphicParameters <- function(Guardrail = 0.05, nring = 20, transp = 0.4,
                              gridsize = 640, gridsize2 = 300,
                              MapType = "roadmap", Location = getwd(),
                              pointcol = "black") {
  Guardrail <<- Guardrail
  nring <<- nring
  transp <<- transp
  gridsize <<- gridsize
  gridsize2 <<- gridsize2
  MapType <<- MapType
  Location <<- Location
  pointcol <<- pointcol
}
Most of the material I have seen about avoiding global objects revolves around a single function that does all the work. I want to keep my step-by-step, multi-function approach but lose the global objects.
Any help would be greatly appreciated.
I understand this may be a major reworking of the code (which is several 1000 lines currently), so I would also love solutions that minimally affect the overall structure of the package.
P.S. I wish I had known about CRAN's displeasure with global objects before I started!
Your problem is very amenable to OOP-style design. You can use reference classes or S4 to export a single global, e.g., a MapAnalysis class generator. The idea is then that someone creates this using
ma <- new('MapAnalysis', option1 = ..., option2 = ..., ...) # S4
# or
ma <- MapAnalysis$new(option1 = ..., ...) # refClass
and can then call your methods with
ma$loadData(...)
ma$setParameters(...)
with the object doing any bookkeeping of options and auxiliary objects internally. It should not be that much work to refactor. If you read the page I linked to at the top of this post, you should see it's probably possible to just wrap all your functions with a setRefClass('MapAnalysis', fields = list(...), methods = list(...)) with few further modifications. (Although it would do you a lot of good down the road to re-think the architecture in OOP terms.)
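A minimal sketch of what such a reference class could look like (the field and method names here are my assumptions based on the workflow you describe, not a definitive implementation):
MapAnalysis <- setRefClass("MapAnalysis",
  fields = list(
    data = "list",       # raw data and the matrices formerly written to the global environment
    graphics = "list",   # plotting options
    posterior = "list"   # MCMC output
  ),
  methods = list(
    graphicParameters = function(Guardrail = 0.05, nring = 20, transp = 0.4,
                                 gridsize = 640, gridsize2 = 300,
                                 MapType = "roadmap", Location = getwd(),
                                 pointcol = "black") {
      # store the options in a field instead of assigning to the global environment
      graphics <<- list(Guardrail = Guardrail, nring = nring, transp = transp,
                        gridsize = gridsize, gridsize2 = gridsize2,
                        MapType = MapType, Location = Location, pointcol = pointcol)
      invisible(.self)
    },
    loadData = function(...) {
      # bookkeeping of data objects stays inside the object
      data <<- list(...)
      invisible(.self)
    }
  )
)

ma <- MapAnalysis$new()
ma$graphicParameters(nring = 30)   # options now live in ma$graphics, not the global environment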

Accessing class values in R's poLCA

I am trying my hand at learning latent class analysis while also learning R. I'm using the poLCA package and am having a bit of trouble accessing the attributes. I can run the sample code just fine:
ds = read.csv("http://www.math.smith.edu/r/data/help.csv")
ds = within(ds, (cesdcut = ifelse(cesd>20, 1, 0)))
library(poLCA)
res2 = poLCA(cbind(homeless=homeless+1,
cesdcut=cesdcut+1, satreat=satreat+1,
linkstatus=linkstatus+1) ~ 1,
maxiter=50000, nclass=3,
nrep=10, data=ds)
but in order to make this more useful, I'd like to access the attributes within the objects created by the poLCA class as such:
attr(res2, 'Nobs')
attr(res2, 'maxiter')
but they both come up as NULL. I expect Nobs to be 453 (determined by the function) and maxiter to be 50000 (dictated by my input value).
I'm sure I'm just being naive, but I could use any help available. Thanks a lot!
Welcome to R. You've got the model-fitting syntax right, in that you can get a model out (I don't know how latent class analysis works, so I can't speak to the statistical validity of your result). However, you've mixed up the different ways in which R can store information pertaining to a model.
poLCA returns an object of class poLCA, which is a list containing the following elements:
(...)
Nobs: number of fully observed cases (less than or equal to N).
maxiter: maximum number of iterations through which the estimation algorithm was set to run.
Since it's a list, you can extract individual elements from your model object using the $ operator:
res2$Nobs # number of observations
res2$maxiter # maximum iterations
In some cases, there might be extractor functions to get this information without having to do low-level indexing. For example, many model-fitting functions will have a fitted method, which pulls out the vector of fitted values on the training data; and similarly residuals pulls out the vector of residuals. You should check whether there are such extractor functions provided by the poLCA package and use them if possible; that way, you're not making assumptions about the structure of the model object that might be broken in the future.
This is distinct from getting the attributes of an object, which is what you use attr for. Attributes in R are what you might call metadata: they contain R-specific information about an object itself, rather than information about whatever the object relates to. Examples of common attributes include class (the class of an object), dim (the dimensions of an array or matrix), names (the names of individual elements of a vector/list/array) and so on.
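For example, a quick way to see what is actually stored in the fitted object, as opposed to its attributes, is:
str(res2, max.level = 1)   # list components such as Nobs, maxiter, posterior, ...
attributes(res2)           # typically just the names and class of the list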
