I'm experiencing some trouble when using the polr function from the MASS package.
Here is a subset of the data I have:
# response variable
rep = factor(c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06, 0.03))
# "rep" is discrete variable which represents proportion so that it varies between 0 and 1
# It is discrete proportions because it is the proportion of TRUE over a finite list of TRUE/FALSE. example: if the list has 3 arguments, the proportions value can only be 0,1/3,2/3 or 1
# predicted variable
set.seed(10)
pred.1 = sample(x=rep(1:5,10),size=50)
pred.2 = sample(x=rep(c('a','b','c','d','e'),10),size=50)
# "pred" are discrete variables
# polr (from the MASS package)
library(MASS)
polr(rep ~ pred.1 + pred.2)
The subset I gave you works fine! But my entire data set, and some subsets of it, does not, and I can't find anything in my data that differs from this subset except the quantity. So here is my question: are there any limitations, for example on the number of levels, that would lead to the following error message:
Error in optim(s0, fmin, gmin, method = "BFGS", ...) :
  initial value in 'vmmin' is not finite
and the warning message:
glm.fit: fitted probabilities numerically 0 or 1 occurred
(I had to translate these two messages into English, so they may not be 100% accurate.)
I sometimes get only the warning message, and sometimes everything is fine, depending on which subset of my data I use.
For information, my rep variable has a total of 101 levels (and contains nothing other than the kind of data I described).
So it is a terrible question that I am asking, because I can't give you my full dataset and I don't know where the problem is. Can you guess where my problem comes from based on this information?
Thank you
Following #joran's advice that your problem is probably the 100-level factor, I'm going to recommend something that probably isn't statistically valid but will probably still be effective in your particular situation: don't use logistic regression at all. Just drop it. Perform a simple linear regression and then discretize your output as necessary using a specialized rounding procedure. Give it a shot and see how well it works for you.
rep.v = c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04, 0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03, 0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06, 0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02, 0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06, 0.03)
set.seed(10)
pred.1 = factor(sample(x=rep(1:5,10),size=50))
pred.2 = factor(sample(x=rep(c('a','b','c','d','e'),10),size=50))
model = lm(rep.v ~ pred.1 + pred.2)  # pred.1 and pred.2 are already factors
output = predict(model, newdata = data.frame(pred.1, pred.2))
# Here's one way you could accomplish the discretization/rounding
f.levels = unique(rep.v)
rounded = sapply(output, function(x) {
  # snap each prediction to the nearest observed level
  f.levels[which.min(abs(f.levels - x))]
})
> rounded
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
0.06 0.07 0.00 0.06 0.15 0.00 0.07 0.00 0.13 0.06 0.06 0.15 0.15 0.92 0.15 0.92 0.15 0.15 0.06 0.06 0.00 0.07 0.15 0.15
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
0.15 0.15 0.00 0.00 0.15 0.00 0.15 0.15 0.07 0.15 0.00 0.07 0.15 0.00 0.15 0.15 0.00 0.15 0.15 0.15 0.92 0.15 0.15 0.00
49 50
0.13 0.15
orm from the rms package can handle ordered outcomes with a large number of categories.
library(rms)
orm(rep ~ pred.1 + pred.2)
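If you want a self-contained sketch on the question's sample data (assuming the rep, pred.1 and pred.2 vectors from the question are in the workspace; the data frame wrapper and factor() calls are my additions):
library(rms)
# wrap the question's vectors in a data frame and treat the predictors as categorical
d <- data.frame(rep = rep, pred.1 = factor(pred.1), pred.2 = factor(pred.2))
orm(rep ~ pred.1 + pred.2, data = d)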
I have a dataframe that stores information about when a particular assessment happened ('when'). This assessment happened at different times (t1 - t3), which vary by participant.
The dataframe also contains all the assessments ever completed by every participant (including the one referenced in the 'when' column). I only want the assessment information referenced in the 'when' column. So if the number is 1, I want to keep all the data related to that assessment and remove all the data that was not collected at that assessment. Please note that my actual data set has many more variables than this shortened one, so any solution should not rely on spelling out each variable name.
Here's the best I can do. The problem with this solution is that it would have to be repeated for every variable name.
df2 <- mutate(.data = df,
              a1G_when = if_else(when == 1, a1G_t1, NA_real_))
# here is what we start with
df <- structure(list(id = 1:10, when = c(1, 3, 2, 1, 2, 1, 3, 2, 3,
1), a1G_t1 = c(0.78, 0.21, 0.04, 0.87, 0.08, 0.25, 0.9, 0.77,
0.51, 0.5), Stqo_t1 = c(0.68, 0.77, 0.09, 0.66, 0.94, 0.05, 0.97,
0.92, 1, 0.04), Twcdz_t1 = c(0.95, 0.41, 0.29, 0.54, 0.06, 0.45,
0.6, 0.24, 0.17, 0.55), Kgh_t1 = c(0.25, 0.86, 0.37, 0.34, 0.97,
0.75, 0.73, 0.68, 0.37, 0.66), `2xWX_t1` = c(0.47, 0.52, 0.23,
0.5, 0.88, 0.71, 0.21, 0.98, 0.76, 0.21), `2IYnS_t1` = c(0.32,
0.75, 0.03, 0.46, 0.89, 0.71, 0.51, 0.83, 0.34, 0.32), a1G_t2 = c(0.97,
0.01, 0.58, 0.33, 0.58, 0.37, 0.76, 0.33, 0.39, 0.56), Stqo_t2 = c(0.78,
0.42, 0.5, 0.69, 0.09, 0.72, 0.84, 0.94, 0.46, 0.83), Twcdz_t2 = c(0.62,
0.34, 0.72, 0.62, 0.8, 0.26, 0.3, 0.88, 0.42, 0.53), Kgh_t2 = c(0.99,
0.66, 0.02, 0.17, 0.51, 0.03, 0.03, 0.74, 0.1, 0.26), `2xWX_t2` = c(0.68,
0.97, 0.56, 0.27, 0.66, 0.71, 0.96, 0.24, 0.37, 0.76), `2IYnS_t2` = c(0.24,
0.88, 0.58, 0.31, 0.8, 0.92, 0.91, 0.9, 0.55, 0.52), a1G_t3 = c(0.73,
0.6, 0.66, 0.06, 0.33, 0.34, 0.09, 0.44, 0.73, 0.56), Stqo_t3 = c(0.28,
0.88, 0.56, 0.75, 0.85, 0.33, 0.88, 0.4, 0.63, 0.61), Twcdz_t3 = c(0.79,
0.95, 0.41, 0.07, 0.99, 0.06, 0.74, 0.17, 0.89, 0.4), Kgh_t3 = c(0.06,
0.52, 0.35, 0.91, 0.43, 0.74, 0.72, 0.96, 0.39, 0.4), `2xWX_t3` = c(0.25,
0.09, 0.64, 0.32, 0.15, 0.14, 0.18, 0.33, 0.97, 0.6), `2IYnS_t3` = c(0.92,
0.49, 0.09, 0.95, 0.3, 0.83, 0.82, 0.56, 0.29, 0.36)), row.names = c(NA,
-10L), class = "data.frame")
# here is an example of what I want with the first column. I would also want all other repeating columns to look like this (Stqo_when, Twcdz_when, etc.)
id when a1G_when
1 1 1 0.78
2   2    3     0.60
3 3 2 0.58
4 4 1 0.87
5 5 2 0.58
6 6 1 0.25
7 7 3 0.09
8 8 2 0.33
9 9 3 0.73
10 10 1 0.50
Using data.table, you could do something like:
library(data.table)
# build the target "_when" column names from the variable stems
cols <- unique(paste0(gsub("_.*", "", setdiff(names(df), c("id", "when"))), "_when"))
setDT(df)[
  # 1. store, per row, the name of the source column, e.g. "a1G_t2"
  , (cols) := lapply(cols, function(x) paste0(gsub("_.*", "", x), "_t", when))][
  # 2. replace each stored name with the value of the column it points to
  , (cols) := lapply(cols, function(x) as.character(.SD[[get(x)]])), by = cols][
  # 3. convert the looked-up values back from character to numeric
  , (cols) := lapply(.SD, as.numeric), .SDcols = cols
]
Output (only the relevant _when columns shown):
a1G_when Stqo_when Twcdz_when Kgh_when 2xWX_when 2IYnS_when
1: 0.78 0.68 0.95 0.25 0.47 0.32
2: 0.60 0.88 0.95 0.52 0.09 0.49
3: 0.58 0.50 0.72 0.02 0.56 0.58
4: 0.87 0.66 0.54 0.34 0.50 0.46
5: 0.58 0.09 0.80 0.51 0.66 0.80
6: 0.25 0.05 0.45 0.75 0.71 0.71
7: 0.09 0.88 0.74 0.72 0.18 0.82
8: 0.33 0.94 0.88 0.74 0.24 0.90
9: 0.73 0.63 0.89 0.39 0.97 0.29
10: 0.50 0.04 0.55 0.66 0.21 0.32
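For comparison, here is a base R sketch of the same per-row lookup, run on the original data.frame (i.e. before setDT); it assumes every repeated column follows the "<stem>_t<when>" naming scheme:
stems <- unique(sub("_.*", "", setdiff(names(df), c("id", "when"))))
for (s in stems) {
  # for row i, pull the value of column "<stem>_t<when[i]>"
  df[[paste0(s, "_when")]] <- mapply(function(i, w) df[[paste0(s, "_t", w)]][i],
                                     seq_len(nrow(df)), df$when)
}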
Here is an opportunity to use the new tidyr::pivot_longer. We can use it to reshape the data so that var and t are in their own columns, filter to just the rows we want (i.e. where t equals when), and then pivot the data back out to wide.
library(tidyverse)
df1 <- structure(list(ID = c(101, 102, 103, 104, 105), when = c(1, 2, 3, 1, 2), var1_t1 = c(5, 6, 4, 5, 6), var2_t1 = c(2, 3, 4, 2, 3), var1_t2 = c(7, 8, 9, 7, 8), var2_t2 = c(5, 4, 5, 4, 5), var1_t3 = c(3, 4, 3, 4, 3), var2_t3 = c(6, 7, 6, 7, 6)), row.names = c(NA, 5L), class = "data.frame")
df1 %>%
  pivot_longer(
    cols = starts_with("var"),
    names_to = c("var", "t"),
    names_sep = "_t",
    values_to = "val"
  ) %>%
  filter(when == as.numeric(t)) %>%
  select(-t) %>%
  pivot_wider(names_from = "var", values_from = "val")
#> # A tibble: 5 x 4
#> ID when var1 var2
#> <dbl> <dbl> <dbl> <dbl>
#> 1 101 1 5 2
#> 2 102 2 8 4
#> 3 103 3 3 6
#> 4 104 1 5 2
#> 5 105 2 8 5
Created on 2019-07-16 by the reprex package (v0.3.0)
I need to read the following matrix from a file. It's a symmetric correlation matrix, so half of it is omitted.
1.00
0.49 1.00
0.53 0.57 1.00
0.49 0.46 0.48 1.00
0.51 0.53 0.57 0.57 1.00
0.33 0.30 0.31 0.24 0.38 1.00
0.32 0.21 0.23 0.22 0.32 0.43 1.00
0.20 0.16 0.14 0.12 0.17 0.27 0.33 1.00
0.19 0.08 0.07 0.19 0.23 0.24 0.26 0.25 1.00
0.30 0.27 0.24 0.21 0.32 0.34 0.54 0.46 0.28 1.00
0.37 0.35 0.37 0.29 0.36 0.37 0.32 0.29 0.30 0.35 1.00
0.21 0.20 0.18 0.16 0.27 0.40 0.58 0.45 0.27 0.59 0.31 1.00
Currently, I'm using
data1 <- na.omit(as.vector(t(read.table('triangle-data.txt', fill = TRUE))))
pt <- 12
R <- matrix(0, nrow = pt, ncol = pt)
for (i in 1:pt) {
  # row i of the lower triangle occupies positions i(i-1)/2 + 1 to i(i+1)/2 of data1
  R[i, 1:i] <- data1[(i * (i - 1) / 2 + 1):(i * (i + 1) / 2)]
}
R <- R + t(R) - diag(rep(1, pt))
R
The result is
> dput(R)
structure(c(1, 0.49, 0.53, 0.49, 0.51, 0.33, 0.32, 0.2, 0.19,
0.3, 0.37, 0.21, 0.49, 1, 0.57, 0.46, 0.53, 0.3, 0.21, 0.16,
0.08, 0.27, 0.35, 0.2, 0.53, 0.57, 1, 0.48, 0.57, 0.31, 0.23,
0.14, 0.07, 0.24, 0.37, 0.18, 0.49, 0.46, 0.48, 1, 0.57, 0.24,
0.22, 0.12, 0.19, 0.21, 0.29, 0.16, 0.51, 0.53, 0.57, 0.57, 1,
0.38, 0.32, 0.17, 0.23, 0.32, 0.36, 0.27, 0.33, 0.3, 0.31, 0.24,
0.38, 1, 0.43, 0.27, 0.24, 0.34, 0.37, 0.4, 0.32, 0.21, 0.23,
0.22, 0.32, 0.43, 1, 0.33, 0.26, 0.54, 0.32, 0.58, 0.2, 0.16,
0.14, 0.12, 0.17, 0.27, 0.33, 1, 0.25, 0.46, 0.29, 0.45, 0.19,
0.08, 0.07, 0.19, 0.23, 0.24, 0.26, 0.25, 1, 0.28, 0.3, 0.27,
0.3, 0.27, 0.24, 0.21, 0.32, 0.34, 0.54, 0.46, 0.28, 1, 0.35,
0.59, 0.37, 0.35, 0.37, 0.29, 0.36, 0.37, 0.32, 0.29, 0.3, 0.35,
1, 0.31, 0.21, 0.2, 0.18, 0.16, 0.27, 0.4, 0.58, 0.45, 0.27,
0.59, 0.31, 1), .Dim = c(12L, 12L))
This is too unwieldy, and I need to hard-code its size. Is there a more convenient way?
I used a combination of readLines and strsplit to read the file
a <- lapply(readLines("triangle.txt"),
            function(x) na.omit(as.numeric(strsplit(x, " +")[[1]])))
and rbind to cast it into a square matrix
A <- do.call("rbind", a)
Despite the warning, the lower part of the matrix is read correctly from the file, but the upper part is garbage because rbind recycles the shorter rows, which I fixed with a little dirty trick:
A[upper.tri(A)] <- 0
A <- A + t(A) - diag(nrow(A))
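A quick sanity check that the reconstruction worked:
# the result should be symmetric with ones on the diagonal
stopifnot(isSymmetric(A), all(diag(A) == 1))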
EDIT
Another, simpler solution based on the vector of coefficients:
data1 <- na.omit(as.vector(t(read.table('triangle.txt', fill = TRUE))))
# solve n * (n + 1) / 2 = length(data1) for n, taking the positive root
n <- round(max(Re(polyroot(c(-length(data1), 1/2, 1/2)))))
A <- matrix(0, n, n)
A[upper.tri(A, diag = TRUE)] <- data1
A <- A + t(A) - diag(n)
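For the record, the quadratic can also be solved in closed form, avoiding polyroot entirely:
# n(n+1)/2 = length(data1)  =>  n = (sqrt(8 * length(data1) + 1) - 1) / 2
n <- round((sqrt(8 * length(data1) + 1) - 1) / 2)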
I want to combine numbers from pairs of columns within a data frame (the values in the columns are the upper and lower limits of confidence intervals from a statistical analysis).
My preferred method would be to use tidyr and the unite function. But take 0.20 as an example: that number gets modified to 0.2, i.e. the last decimal is dropped when it is zero. Is there any way to keep the original format when using unite?
unite is described here: https://www.rdocumentation.org/packages/tidyr/versions/0.8.2/topics/unite
Example:
# Dataframe
df <- structure(list(est = c(0.05, -0.16, -0.02, 0, -0.11, 0.15, -0.26,
-0.23), low2.5 = c(0.01, -0.2, -0.05, -0.03, -0.2, 0.1, -0.3,
-0.28), up2.5 = c(0.09, -0.12, 0, 0.04, -0.01, 0.2, -0.22, -0.17
)), row.names = c(NA, 8L), class = "data.frame")
Combining (uniting) the confidence interval columns with unite, using a comma as a separator
library(tidyr)
df <- unite(df, "CI", c("low2.5", "up2.5"), sep = ", ", remove = TRUE)
gives
df
est CI
1 0.05 0.01, 0.09
2 -0.16 -0.2, -0.12
3 -0.02 -0.05, 0
4 0.00 -0.03, 0.04
5 -0.11 -0.2, -0.01
6 0.15 0.1, 0.2
7 -0.26 -0.3, -0.22
8 -0.23 -0.28, -0.17
I would want this:
est CI
1 0.05 0.01, 0.09
2 -0.16 -0.20, -0.12
3 -0.02 -0.05, 0.00
4 0.00 -0.03, 0.04
5 -0.11 -0.20, -0.01
6 0.15 0.10, 0.20
7 -0.26 -0.30, -0.22
8 -0.23 -0.28, -0.17
I believe doing this with base R would be complicated (having to rearrange the many combined columns and delete the old ones). Is there any way to keep unite from dropping the trailing zero decimals?
This works:
library(tidyverse)
df %>%
  mutate_if(is.numeric, ~ format(., nsmall = 2)) %>%
  unite("CI", c("low2.5", "up2.5"), sep = ", ", remove = TRUE)
# est CI
#1 0.05 0.01, 0.09
#2 -0.16 -0.20, -0.12
#3 -0.02 -0.05, 0.00
#4 0.00 -0.03, 0.04
#5 -0.11 -0.20, -0.01
#6 0.15 0.10, 0.20
#7 -0.26 -0.30, -0.22
#8 -0.23 -0.28, -0.17
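One caveat: mutate_if(is.numeric, ...) converts every numeric column to character, including est. If est should stay numeric, a variation on the same idea that formats only the interval columns (using mutate_at from the same dplyr generation) would be:
library(dplyr)
library(tidyr)
df %>%
  mutate_at(vars(low2.5, up2.5), ~ sprintf("%.2f", .)) %>%
  unite("CI", c("low2.5", "up2.5"), sep = ", ", remove = TRUE)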
Clusters have been formed. Now I am wondering if we can select the elements belonging to a particular cluster id.
Here are the different clusters that were formed:
1 2 3 4 5 6 7 8 9
549 290 1206 103 97 102 2 208 123
10 11 12 13 14 15 16 17 18
17 75 293 981 23 586 25 15 365
For example, I need to choose the elements from cluster 12. How can I do that?
This is the code used to form the clusters:
db <- dbscan(cbind(Final$event_begin_longitude,Final$event_begin_latitude), .0025, minPts = 1, scale = FALSE, method = "raw")
There is no predefined method to access the elements of a cluster. However, you can easily do it yourself. The return value of dbscan has a component named cluster, which is in the same order as your input:
dta <- structure(list(V1 = c(0, 0.04, 0.09, 0.13, 0.17, 0.22, 0.26, 0.3, 0.35, 0.39, 0.43, 0.48, 0.52, 0.57, 0.61, 0.65, 0.7, 0.74, 0.78, 0.83, 0.87, 0.91, 0.96, 1),
V2 = c(0.01, 0.01, 0, 0, 0.08, 0.03, 0.01, 0.05, 0.45, 0.73, 0.91, 0.9, 0.67, 0.77, 0.98, 0.94, 0.86, 1, 0.38, 0.09, 0.01, 0.01, 0, 0)),
.Names = c("V1", "V2"),
row.names = c(NA, -24L),
class = "data.frame")
db <- dbscan::dbscan(dta, .25, minPts = 1)
# Combine values and their cluster
cbind(dta, db$cluster)
# Plot with colored clusters
plot(dta, col = db$cluster, pch = 16)
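To actually extract the members of one cluster, subset the data on that vector. For instance, with the toy data above (replace 2 with whichever cluster id you want):
dta[db$cluster == 2, ]
# and for the data in the question, cluster 12 would be:
# Final[db$cluster == 12, ]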
I would like to calculate the mean value of each of my variables, and then create a list of the names of the variables with the 3 largest mean values.
I will then use this list to subset my dataframe and will only include the 3 selected variables in additional analysis.
I'm close, but can't quite seem to write the code efficiently. And I'm trying to use pipes for the first time.
Here is a simplified dataset.
FA1 <- c(0.68, 0.79, 0.65, 0.72, 0.79, 0.78, 0.77, 0.67, 0.77, 0.7)
FA2 <- c(0.08, 0.12, 0.07, 0.13, 0.09, 0.12, 0.13, 0.08, 0.17, 0.09)
FA3 <- c(0.1, 0.06, 0.08, 0.09, 0.06, 0.08, 0.09, 0.09, 0.06, 0.08)
FA4 <- c(0.17, 0.11, 0.19, 0.13, 0.14, 0.14, 0.13, 0.16, 0.11, 0.16)
FA5 <- c(2.83, 0.9, 3.87, 1.55, 1.91, 1.46, 1.68, 2.5, 3.0, 1.45)
df <- data.frame(FA1, FA2, FA3, FA4, FA5)
And here is the piece of code I've written that doesn't quite get me what I want.
colMeans(df) %>% rank()
First, identify the three columns with the highest means. I use colMeans to calculate the column means, then sort the means in decreasing order and keep only the first three, which are the three largest.
three <- sort(colMeans(df), decreasing = TRUE)[1:3]
Then, keep only those columns.
df[,names(three)]
> df[,names(three)]
FA5 FA1 FA4
1 2.83 0.68 0.17
2 0.90 0.79 0.11
3 3.87 0.65 0.19
4 1.55 0.72 0.13
5 1.91 0.79 0.14
6 1.46 0.78 0.14
7 1.68 0.77 0.13
8 2.50 0.67 0.16
9 3.00 0.77 0.11
10 1.45 0.70 0.16
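Since you mentioned wanting to try pipes: the same logic reads naturally as a magrittr chain (same approach as above, just piped):
library(magrittr)
top3 <- df %>%
  colMeans() %>%
  sort(decreasing = TRUE) %>%
  head(3) %>%
  names()
df[, top3]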