I have a problem with spdep. Starting with a matrix of non-missing distances produced by a function:
dist_m <- geoDistMatrix(data1, group = 'fips_dist')
dist_m[upper.tri(dist_m)] <- t(dist_m)[upper.tri(dist_m)]
we then turn this into weights with a linear inverse:
max_dist <- max(dist_m)
w1 <- (max_dist + 1 - dist_m)/(max_dist + 1)
and now
lw <- mat2listw(w1, row.names = rownames(w1), style = 'M')
I check to make sure there are no missing weights:
any(is.na(lw$weights))
and since there aren't, go ahead with:
errorsarlm(cvote ~ inc, data = data1, lw, method = 'eigen', quiet = F, zero.policy = TRUE)
This leads to the following error:
Error in subset.listw(listw, subset, zero.policy = zero.policy) :
Not yet able to subset general weights lists
This is because at least one observation in data1 is incomplete, i.e. has missing values. errorsarlm therefore wants to subset the data, i.e. restrict it to complete cases, but it cannot do that for a general weights list - which is exactly what the error message says.
The best fix is to subset the data manually or to correct the incomplete cases.
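A minimal sketch of the manual subsetting, assuming the only model variables are cvote and inc as above (geoDistMatrix is the question's own function):
complete <- complete.cases(data1$cvote, data1$inc)
data1_cc <- data1[complete, ]
# rebuild distances and weights from the complete cases only
dist_m <- geoDistMatrix(data1_cc, group = 'fips_dist')
dist_m[upper.tri(dist_m)] <- t(dist_m)[upper.tri(dist_m)]
max_dist <- max(dist_m)
w1 <- (max_dist + 1 - dist_m)/(max_dist + 1)
lw <- mat2listw(w1, row.names = rownames(w1), style = 'M')
errorsarlm(cvote ~ inc, data = data1_cc, lw, method = 'eigen', quiet = FALSE, zero.policy = TRUE)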
Alternatively: spdep only creates a subsettable listw object for non-general weights by default. Set zero.policy = TRUE when you call mat2listw or nb2listw so that observations without neighbours, i.e. with zero weight, are accepted.
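For example (a sketch; set.ZeroPolicyOption sets the policy globally in spdep, and neighbours below stands in for an nb object such as one produced by poly2nb):
library(spdep)
set.ZeroPolicyOption(TRUE)  # global default for subsequent spdep calls
# or pass it per call when building the weights list:
lw <- nb2listw(neighbours, style = 'W', zero.policy = TRUE)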
New to stackoverflow. I'm working on a project with NHIS data, but I cannot get the svyglm function to work even for a simple, unadjusted logistic regression with a binary predictor and binary outcome variable (ultimately I'd like to use multiple categorical predictors, but one step at a time).
El_under_glm<-svyglm(ElUnder~SO2, design=SAMPdesign, subset=NULL, family=binomial(link="logit"), rescale=FALSE, correlation=TRUE)
Error in eval(extras, data, env) :
object '.survey.prob.weights' not found
I changed the variables to 0 and 1 instead:
Under_narm$SO2REG<-ifelse(Under_narm$SO2=="Heterosexual", 0, 1)
Under_narm$ElUnderREG<-ifelse(Under_narm$ElUnder=="No", 0, 1)
But then I get a different issue:
El_under_glm<-svyglm(ElUnderREG~SO2REG, design=SAMPdesign, subset=NULL, family=binomial(link="logit"), rescale=FALSE, correlation=TRUE)
Error in svyglm.survey.design(ElUnderREG ~ SO2REG, design = SAMPdesign, :
all variables must be in design= argument
This is the design I'm using to account for the weights -- I'm pretty sure it's correct:
SAMPdesign=svydesign(data=Under_narm, id= ~NHISPID, weight= ~SAMPWEIGHT)
Any and all assistance appreciated! I've got a good grasp of stats but am a slow coder. Let me know if I can provide any other information.
Using some make-believe sample data, I was able to get your model to run by setting rescale = TRUE. The documentation states:
Rescaling of weights, to improve numerical stability. The default
rescales weights to sum to the sample size. Use FALSE to not rescale
weights.
So one solution may be simply to set rescale = TRUE.
library(survey)
# sample data
Under_narm <- data.frame(SO2 = factor(rep(1:2, 1000)),
ElUnder = sample(0:1, 1000, replace = TRUE),
NHISPID = paste0("id", 1:1000),
SAMPWEIGHT = sample(c(0.5, 2), 1000, replace = TRUE))
# with 'rescale' = TRUE
SAMPdesign=svydesign(ids = ~NHISPID,
data=Under_narm,
weights = ~SAMPWEIGHT)
El_under_glm<-svyglm(formula = ElUnder~SO2,
design=SAMPdesign,
family=quasibinomial(), # this family avoids warnings
rescale=TRUE) # Weights rescaled to the sum of the sample size.
summary(El_under_glm, correlation = TRUE) # use correlation with summary()
Otherwise, looking at the code for this function's method with survey:::svyglm.survey.design, it seems like there may be a bug. I could be wrong, but by my reading, when rescale is FALSE, .survey.prob.weights does not appear to get assigned a value:
if (is.null(g$weights))
g$weights <- quote(.survey.prob.weights)
else g$weights <- bquote(.survey.prob.weights * .(g$weights)) # bug?
g$data <- quote(data)
g[[1]] <- quote(glm)
if (rescale)
data$.survey.prob.weights <- (1/design$prob)/mean(1/design$prob)
There may be a workaround if you assign a vector of numeric values to .survey.prob.weights in the global environment. I have no idea what these values should be, but your error goes away if you do something like the following. (.survey.prob.weights needs to be double the length of the data.)
SAMPdesign=svydesign(ids = ~NHISPID,
data=Under_narm,
weights = ~SAMPWEIGHT)
.survey.prob.weights <- rep(1, 2000)
El_under_glm<-svyglm(formula = ElUnder~SO2,
design=SAMPdesign,
family=quasibinomial(),
rescale=FALSE)
summary(El_under_glm, correlation = TRUE)
I have 9,150 polygons in my dataset. I was trying to run a spatial autoregressive model (SAR) in spdep to test spatial dependence of my outcome variable. After running the model, I wanted to examine the direct/indirect impacts, but encountered an error that seems to have something to do with the length of neighbors in the weights matrix not being equal to n.
I tried running the very same equation as an SLX model (spatial lag of X), and impacts() worked fine, even though some polygons in my set had no neighbors. I Googled and looked at the spdep documentation, but couldn't find a clue on how to solve this error.
# Defining queen contiguity neighbors for polyset and storing the matrix as list
q.nbrs <- poly2nb(polyset)
listweights <- nb2listw(q.nbrs, zero.policy = TRUE)
# Defining the model
model.equation <- TIME ~ A + B + C
# Run SAR model
reg <- lagsarlm(model.equation, data = polyset, listw = listweights, zero.policy = TRUE)
# Run impacts() to show direct/indirect impacts
impacts(reg, listw = listweights, zero.policy = TRUE)
Error in intImpacts(rho = rho, beta = beta, P = P, n = n, mu = mu, Sigma = Sigma, :
length(listweights$neighbours) == n is not TRUE
I know this is a question from 2019, but maybe it can help people dealing with the same problem. I found out that in my case the problem was the type of the dataset: your data = polyset should be of class "SpatialPolygonsDataFrame", which can be achieved by converting your data:
polyset_spatial_sf <- sf::as_Spatial(polyset, IDs = polyset$ID)
Then rerun your code.
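That is, rerun the question's calls with the converted object (a sketch reusing the names defined above):
reg <- lagsarlm(model.equation, data = polyset_spatial_sf,
                listw = listweights, zero.policy = TRUE)
impacts(reg, listw = listweights, zero.policy = TRUE)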
First, I constructed a model with
cf1 <- cforest(y~., data = DATA, strata = DATA$y,
ntree = 200L, mtry = 10)
Since the dataset is very imbalanced (y = 1 accounts for 7% of all observations), I added strata here to make sure observations with y = 1 are not ignored in bagging. cf1 works normally, judging by the confusion matrix. However, when I tried to implement feature selection with
cf1.imp_cond <- varimp(cf1, conditional = TRUE)
It returns
Error in x[strata == s] <- .resample(x[strata == s]) :
NAs are not allowed in subscripted assignments
I can't figure out what this error means. Has anyone encountered this before?
----update
Here is a manipulated test dataset derived from the original data I am using, together with the code:
cf2 <- cforest(X5_years_survival~., data = test, strata = X5_years_survival,
ntree = 200L, mtry = 6)
cf2.imp_cond <- varimp(cf2, conditional = TRUE)
Still, I have the error:
Error in x[strata == s] <- .resample(x[strata == s]) :
NAs are not allowed in subscripted assignments
---update
The error occurs when the kidids_node function is applied.
In fact, if I keep all integer-type covariates as they are, instead of converting them with as.factor, varimp runs without error.
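A sketch of that workaround, assuming the problematic columns in test are factors whose levels are really integer codes (the conversion below is illustrative and only safe for such columns):
fac_cols <- vapply(test, is.factor, logical(1))
fac_cols["X5_years_survival"] <- FALSE  # keep the response a factor
test_int <- test
test_int[fac_cols] <- lapply(test_int[fac_cols],
                             function(col) as.integer(as.character(col)))
cf3 <- cforest(X5_years_survival ~ ., data = test_int,
               strata = test_int$X5_years_survival, ntree = 200L, mtry = 6)
varimp(cf3, conditional = TRUE)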
I am trying to solve the Digit Recognizer competition on Kaggle and I ran into this error.
I loaded the training data and scaled its values by dividing by the maximum pixel value, which is 255. After that, I am trying to build my model.
Here is my code:
library(e1071)  # for svm()
Given_Training_data <- get(load("Given_Training_data.RData"))
Given_Testing_data <- get(load("Given_Testing_data.RData"))
Maximum_Pixel_value = max(Given_Training_data)
Tot_Col_Train_data = ncol(Given_Training_data)
training_data_adjusted <- Given_Training_data[, 2:ncol(Given_Training_data)]/Maximum_Pixel_value
testing_data_adjusted <- Given_Testing_data[, 2:ncol(Given_Testing_data)]/Maximum_Pixel_value
label_training_data <- Given_Training_data$label
final_training_data <- cbind(label_training_data, training_data_adjusted)
smp_size <- floor(0.75 * nrow(final_training_data))
set.seed(100)
training_ind <- sample(seq_len(nrow(final_training_data)), size = smp_size)
training_data1 <- final_training_data[training_ind, ]
train_no_label1 <- as.data.frame(training_data1[,-1])
train_label1 <-as.data.frame(training_data1[,1])
svm_model1 <- svm(train_label1,train_no_label1) #This line is throwing an error
Error : Error in predict.svm(ret, xhold, decision.values = TRUE) : Model is empty!
Please share your thoughts. I am not looking for a full answer but rather some idea that guides me in the right direction, as I am in a learning phase.
Thanks.
Update to the question:
trainlabel1 <- train_label1[sapply(train_label1, function(x) !is.factor(x) | length(unique(x))>1 )]
trainnolabel1 <- train_no_label1[sapply(train_no_label1, function(x) !is.factor(x) | length(unique(x))>1 )]
svm_model2 <- svm(trainlabel1,trainnolabel1,scale = F)
It didn't help either.
Read the manual (https://cran.r-project.org/web/packages/e1071/e1071.pdf):
svm(x, y = NULL, scale = TRUE, type = NULL, ...)
...
Arguments:
...
x: a data matrix, a vector, or a sparse matrix (object of class Matrix provided by the Matrix package, of class matrix.csr provided by the SparseM package, or of class simple_triplet_matrix provided by the slam package).
y: a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).
Therefore, the main problems are that your call to svm swaps the data matrix and the response vector, and that you are passing the response vector as an integer, which results in a regression model. Furthermore, you are passing the response vector as a single-column data frame, which is not how you are supposed to do it. Hence, if you change the call to:
svm_model1 <- svm(train_no_label1, as.factor(train_label1[, 1]))
it will work as expected. Note that training will take some minutes to run.
You may also want to remove features that are constant (where the values in the respective column of the training data matrix are all identical) in the training data, since these will not influence the classification.
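For example, one way to drop such constant columns (a sketch using base R on the objects defined in the question):
keep <- vapply(train_no_label1, function(col) length(unique(col)) > 1, logical(1))
train_no_label1 <- train_no_label1[, keep, drop = FALSE]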
I don't think you need to scale the data manually, since svm will do that itself, unlike most neural network packages.
You can also use the formula interface of svm instead of the separate matrix and vector, which is
svm(result~.,data = your_training_set)
In your case, make sure the response is used as a factor, because you want a label like 1, 2, or 3, not something like 1.5467, which would be a regression.
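For instance (a sketch, assuming the label column produced by the cbind above is named label_training_data):
training_data1$label_training_data <- as.factor(training_data1$label_training_data)
svm_model3 <- svm(label_training_data ~ ., data = training_data1)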
I can debug it if you can share the data: Given_Training_data.RData
I'm using the dataset found here: http://archive.ics.uci.edu/ml/datasets/Qualitative_Bankruptcy
When running this code:
library(caret)
bank <- read.csv("Qualitative_Bankruptcy.data.txt", header=FALSE, na.strings = "?",
strip.white = TRUE)
x=bank[1:6]
y=bank[7]
bank.knn <- train(x, y, method= "knn", trControl = trainControl(method = "cv"))
I get the following error:
Error: nrow(x) == n is not TRUE
The only example I've found is "Error: nrow(x) == n is not TRUE when using Train in Caret"; my y is already a factor vector with two classes, and all the x features are factors as well. I've tried using as.matrix and as.data.frame on both x and y without success.
nrow(x) is equal to 250, but I'm not sure what n refers to in the package.
y is not actually a vector but a data.frame with one column, because bank[7] does not convert the 7th column into a vector, so length(y) is 1. Use bank[, 7] instead. It makes no difference for x, but x could just as well be generated by bank[, 1:6].
Additionally, to make KNN work you probably have to convert the x data.frame, which consists of factor variables, into numeric dummy variables:
x=model.matrix(~. - 1, bank[, 1:6])
y=bank[, 7]
bank.knn <- train(x, y, method= "knn",
trControl = trainControl(method = "cv"))
I'm not a caret user, but I think you have two problems. The extraction method you used did not deliver an atomic vector but rather a list containing a vector; if you ask for length(y) you get 1 rather than 250. The first error is easily solved by changing the definition of y:
y <- bank[[7]] # extract a vector rather than a sublist
Then things get messy. The KNN method expects continuous data (and the error messages you get indicate that caret's author considers it a regression method), but you are passing factor data, so you need to set the problem up as a classification instead.
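Combining the two fixes from the answer above, something like this should run (a sketch; the dummy coding mirrors the model.matrix call shown earlier):
y <- factor(bank[[7]])                    # an atomic factor -> classification
x <- model.matrix(~ . - 1, bank[, 1:6])   # numeric dummy predictors
bank.knn <- train(x, y, method = "knn", trControl = trainControl(method = "cv"))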