I am learning R and want to manually compute the Mann-Whitney U statistic and p-value using a normal approximation (and not use wilcox.test or equivalent). My pensioner's brain struggles with coding so it has taken me hours to produce the same answers as the textbook. However, my code to sum the 'StateRank' for the state with the fewest values is convoluted. How can I replace the commented section with more efficient code? I've hunted high and low, both here and on Google, but I don't even know which search terms to use! It won't surprise me to hear that there is a one-line solution but I'm no nearer knowing what it is.
library(tidyverse)
# Activity 9: aboriginal village size in Alaska and California
a.df <- data.frame(
  Alaska = c(23, 26, 30, 33, 42, 45, 45, 50, 50.5, 96, 113, 557, NA),
  Calif  = c(39, 48, 53.5, 55, 57, 66, 77, 79, 108, 121, 162, 197, 309)
) %>%
  pivot_longer(
    cols = c("Alaska", "Calif"),
    names_to = "State",
    values_to = "Value",
    values_drop_na = TRUE
  ) %>%
  mutate(StateRank = rank(Value, ties.method = "average"))
# clumsy code to sort, then sum ranks (StateRank) for group with fewest values (nA)
#--------------------------------------------------------------------------------
asc_or_desc <- as.matrix(count(a.df, State))
if (as.numeric(asc_or_desc[1, 2]) > as.numeric(asc_or_desc[2, 2])) {
  a.df <- arrange(a.df, desc(State))
} else {
  a.df <- arrange(a.df, State)
}
#--------------------------------------------------------------------------------
nA <- as.numeric(min(count(a.df, State, sort = TRUE)$n))
nB <- as.numeric(max(count(a.df, State, sort = TRUE)$n))
a.U <- sum(a.df$StateRank[1:nA])
a.E <- (nA*(nA+nB+1))/2 # Expectation of U
a.V <- (nA*nB*(nA+nB+1))/12 # Variance of U
a.Z <- (a.U - a.E)/sqrt(a.V)
a.P <- round((1 - round(pnorm(round(abs(a.Z), 2),
                              mean = 0, sd = 1), 4)) * 2, 3)
# all the rounding is to mimic statistical tables (so that
# the answer is the same as in the textbook that I use)
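As a sanity check only (the question rules out using wilcox.test for the calculation itself), the built-in test with the normal approximation can be used to confirm the hand-rolled numbers. Note that wilcox.test also applies a tie correction to the variance, so its p-value may differ slightly from the table-based one:
# Optional check against the built-in test: normal approximation,
# no continuity correction (wilcox.test additionally corrects the variance for ties)
wilcox.test(Value ~ State, data = a.df, exact = FALSE, correct = FALSE)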
Please try this code and tell me if I am on the right track.
I replaced your so-called clumsy code with this one:
... %>%
  group_by(State) %>%
  mutate(mx = max(Value)) %>%
  arrange(desc(mx), desc(Value)) %>%
  select(-mx)
The whole code:
library(tidyverse)
# Activity 9: aboriginal village size in Alaska and California
a.df <- data.frame(
  Alaska = c(23, 26, 30, 33, 42, 45, 45, 50, 50.5, 96, 113, 557, NA),
  Calif  = c(39, 48, 53.5, 55, 57, 66, 77, 79, 108, 121, 162, 197, 309)
) %>%
  pivot_longer(
    cols = c("Alaska", "Calif"),
    names_to = "State",
    values_to = "Value",
    values_drop_na = TRUE
  ) %>%
  mutate(StateRank = rank(Value, ties.method = "average")) %>%
  group_by(State) %>%
  mutate(mx = max(Value)) %>%
  arrange(desc(mx), desc(Value)) %>%
  select(-mx)
-----------------------------------------------------------------------------
nA <- min(count(a.df, State)$n) # number of observations in the smaller group (as in the question)
nB <- max(count(a.df, State)$n) # number of observations in the larger group
a.U <- sum(a.df$StateRank[1:nA])
a.E <- (nA*(nA+nB+1))/2 # Expectation of U
a.V <- (nA*nB*(nA+nB+1))/12 # Variance of U
a.Z <- (a.U - a.E)/sqrt(a.V)
a.P <- round((1 - round(pnorm(round(abs(a.Z), 2),
                              mean = 0, sd = 1), 4)) * 2, 3)
# all the rounding is to mimic statistical tables (so that
# the answer is the same as in the textbook that I use)
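For what it's worth, since the question hints at a one-line solution: the rank sum of the smaller group can also be taken directly, without reordering the data frame at all. A minimal sketch in base R (the names n_by_state and small are mine):
# Sum the ranks of whichever State has the fewer observations,
# without sorting or rearranging a.df first
n_by_state <- table(a.df$State)
small      <- names(which.min(n_by_state))
nA  <- min(n_by_state)
nB  <- max(n_by_state)
a.U <- sum(a.df$StateRank[a.df$State == small])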
Assume you are given a vector of degrees which represent points on the unit circle. How could you formally determine the minimum number of points that could be isolated in one semicircle by a diameter? I understand there may be multiple diameters which satisfy this property for a given set of data points. That is okay. I am solely interested in the minimum number of points which can be isolated, not the particular diameter. It also needs to be computationally efficient, so it works for a large number of points. I have written the following based on #d.b's suggestion, but the algorithm fails for tst4.
In R,
# Plots the points on a circle and attempts to find the minimum m (algorithm incorrect for tst4)
min_dia <- function(degs, plot = T){
  library(dplyr)
  plot_circle <- function(x, y, r) {
    angles <- seq(0, 2*pi, length.out = 360)
    lines(r*cos(angles) + x, r*sin(angles) + y)
  }
  deg <- degs
  plot_boo <- plot
  # #d.b suggestion method for finding m
  temp <- abs((deg - min(deg) + 180) %% 360 - 180)
  m <- min(table(cut(temp, breaks = c(-180, 90, 180))))
  if(plot_boo == T){
    tm_deg <- c(0, 30, 45, 60, 90, 120, 135, 150, 180, 210, 225, 240, 270, 300, 315, 330)
    tm_rad <- (tm_deg * pi) / 180
    th <- (deg*pi)/180
    r <- 1
    x <- r*cos(th)
    y <- r*sin(th)
    windows.options(width = 600, height = 600)
    plot(x, y, xlim = c(-1.1, 1.1), ylim = c(-1.1, 1.1), pch = 20,
         xlab = "", ylab = "", main = "Plot of Given Data Points by Degrees")
    plot_circle(0, 0, 1)
    points(0, 0)
    text(r*cos(tm_rad), r*sin(tm_rad), labels = paste0(tm_deg), cex = 0.5, pos = 3)
  }
  return(m)
}
# Function to plot diameter by degrees
plot_dia <- function(deg){
  deg1 <- deg
  deg2 <- deg + 180
  th1 <- (deg1*pi)/180
  th2 <- (deg2*pi)/180
  x1 <- cos(th1)
  y1 <- sin(th1)
  x2 <- cos(th2)
  y2 <- sin(th2)
  lines(c(x1, x2), c(y1, y2))
}
# Testing
tst1 <- c(15, 45, 20) # m = 0
tst2 <- c(15, 45, 200) # m = 1
tst3 <- c(15, 46, 114, 137, 165, 187, 195, 215, 271, 328) # m = 3
tst4 <- c(36, 304, 281, 254, 177, 59, 47, 158, 244, 149, 317, 235, 345, 209, 204,
156, 325, 95, 215, 267)
# Implementation
min_dia(tst1)
plot_dia(90) # eyeball and plot to check
min_dia(tst2)
plot_dia(190) # eyeball and plot to check
min_dia(tst3)
plot_dia(110) # eyeball and plot to check
min_dia(tst4)
plot_dia(150) # m is probably 2
For the three points provided in the code with degrees 15, 45, and 200 (tst2), the minimum number of points (say m) I could separate with a line would be 1.
For the points with degrees 15, 20, 25, the answer would obviously be 0.
Any help or guidance on an efficient algorithm to solve this minimization problem would be greatly appreciated.
Update:
Here is the plot if you were to run through the R code along with an example of a line which illustrates the minimum number of points you could separate, being 1.
Update:
I have also updated the code above, which allows one to plot the data points, speculate a diameter which minimizes m, and then plot the diameter by degree.
If the points are not sorted, then sort them by angle.
Walk through the list using a two-pointer approach. Increment the right index while the angle difference is < 180, and increment the left index when the angle difference is > 180. The minimum of (right - left, length - right + left) is your desired value.
Note that the scan should be performed in a cyclic manner (you may append a copy of the list with 360 added, so 15, 45, 225 becomes 15, 45, 225, 375, 405, 585), as in the sketch below.
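A rough R sketch of this two-pointer scan (O(n log n) for the sort, O(n) for the walk; function and variable names are my own, and a point lying exactly 180 degrees away is treated as on the boundary rather than inside the semicircle):
# Two-pointer scan: for each point l, grow r while the arc from point l
# stays below 180 degrees; the remaining points fit in the opposite
# semicircle, so keep the smaller of the two counts.
min_semicircle <- function(degs) {
  a  <- sort(degs %% 360)
  n  <- length(a)
  a2 <- c(a, a + 360)       # cyclic copy so the window can wrap past 360
  best <- n
  r <- 1
  for (l in seq_len(n)) {
    while (r < l + n && a2[r + 1] - a2[l] < 180) r <- r + 1
    inside <- r - l + 1     # points in the semicircle starting at a[l]
    best <- min(best, inside, n - inside)
  }
  best
}
min_semicircle(c(15, 45, 200))   # 1
min_semicircle(tst4)             # 5, matching the brute force answer below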
Here is a brute force method. Just draw a line at every angle (0:359 + 0.5) and see which angle gives the least value.
bar = function(degs){
  CUTS = sapply(0:359 + 0.5, function(D){
    temp = ((degs - D + 180) %% 360 - 180)
    min(table(cut(temp, breaks = c(-180, 0, 180))))
  })
  D = (0:359 + 0.5)[which.min(CUTS)]
  m = min(CUTS)
  plot(0, 0, type = "n",
       xlim = c(-1.5, 1.5), ylim = c(-1.5, 1.5),
       ann = FALSE, axes = FALSE, asp = 1)
  plotrix::draw.circle(0, 0, 1)
  degs = degs * pi/180
  xs = cos(degs)
  ys = sin(degs)
  x1 = cos(D * pi/180)
  y1 = sin(D * pi/180)
  x2 = cos((D * pi/180) + pi)
  y2 = sin((D * pi/180) + pi)
  lines(c(x1, x2), c(y1, y2))
  points(xs, ys, pch = 19)
  points(0, 0)
  return(c(min_num = m, angle = D))
}
tst4 <- c(36, 304, 281, 254, 177, 59, 47, 158, 244, 149, 317, 235,
345, 209, 204, 156, 325, 95, 215, 267)
bar(degs = tst4)
# min_num angle
# 5.0 145.5
I have a data.frame P1 (5000 rows x 4 cols) and would like to save the subset of data in columns 2, 3 and 4 when the time-stamp in column 1 falls into a set range determined by a vector TimeStamp (in seconds).
E.g. put all values in columns 2, 3, and 4 into a new data.frame and call each section of data: Condition.1.P1, Condition.2.P1, etc.
The reason I'd like to label them separately is that I have 35 versions of P1 (P2, P3, P33, etc.) and need to be able to melt them together to plot them.
dput(TimeStamp)
c(18, 138, 438, 678, 798, 1278, 1578, 1878, 2178)
dput(head(P1))
structure(list(Time = c(0, 5, 100, 200, 500, 1200), SkinTemp = c(27.781,
27.78, 27.779, 27.779, 27.778, 27.777), HeartRate = c(70, 70,
70, 70, 70, 70), RespirationRate = c(10, 10, 10, 10, 10, 10)), .Names = c("Time",
"SkinTemp", "HeartRate", "RespirationRate"), row.names = c(NA,
6L), class = "data.frame")
Do you want to separate the data by the timestamp ranges and put it in a list? Then this might be what you are looking for:
TimeStamp <- c(18, 138, 438, 678, 798, 1278, 1578, 1878, 2178)
dat <- structure(list(Time = c(0, 5, 100, 200, 500, 1200), SkinTemp = c(27.781,
27.78, 27.779, 27.779, 27.778, 27.777), HeartRate = c(70, 70,
70, 70, 70, 70), RespirationRate = c(10, 10, 10, 10, 10, 10)), .Names = c("Time",
"SkinTemp", "HeartRate", "RespirationRate"), row.names = c(NA,
6L), class = "data.frame")
dat$Segment <- cut(dat$Time,c(-Inf,TimeStamp))
split(dat,dat$Segment)
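If you also want each list element labelled Condition.1.P1, Condition.2.P1, ... as described in the question, a possible follow-up (the naming scheme is only illustrative):
segments <- split(dat, dat$Segment)
names(segments) <- paste0("Condition.", seq_along(segments), ".P1")
# e.g. segments[["Condition.1.P1"]] then holds the rows with Time in (-Inf, 18]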
ts = TimeStamp # segment boundaries (the vector from the question)
P2 = data.frame(NA, NA, NA, NA) # Create empty data.frame
for (i in 1:length(ts)){
  P3 = data.frame() # Create empty changing data.frame
  if (i == 1) {ts1 = 0} else {ts1 = ts[i-1]} # First time stamp starts at 0
  ts2 = ts[i]
  P3 = subset(P1, P1$Time > ts1 & P1$Time < ts2)[, c(2, 3, 4)] # Subset the columns and assign to P3
  if (nrow(P3) == 0){P3 = data.frame(NA, NA, NA)} # If the subset is empty, assign NA
  P3$TimeStamp = paste(ts1, ts2, sep = "-") # Append TimeStamp to P3
  colnames(P3) = colnames(P2) # Make sure column names are the same to allow rbind
  P2 = rbind(P2, P3) # Append P3 to P2
}
P2 = P2[c(2:nrow(P2)), ] # Remove the first row (that has NA)
colnames(P2) = c("SkinTemp", "HeartRate", "RespirationRate", "TimeStamp") # Provide column names
rm(P3); rm(i); rm(ts1); rm(ts2) # Cleanup
I have a dataframe df
dput(df)
structure(list(x = c(49, 50, 51, 52, 53, 54, 55, 56, 1, 2, 3,
4, 5, 14, 15, 16, 17, 2, 3, 4, 5, 6, 10, 11, 3, 30, 64, 66, 67,
68, 69, 34, 35, 37, 39, 2, 17, 18, 99, 100, 102, 103, 67, 70,
72), y = c(2268.14043972082, 2147.62290922552, 2269.1387550775,
2247.31983098201, 1903.39138268307, 2174.78291538358, 2359.51909126411,
2488.39004804939, 212.851575751527, 461.398994384333, 567.150629704352,
781.775113821961, 918.303706148872, 1107.37695799186, 1160.80594193377,
1412.61328924168, 1689.48879626486, 260.737164468854, 306.72700499362,
283.410379620422, 366.813913489692, 387.570173754128, 388.602676983443,
477.858510450125, 128.198042456082, 535.519377609133, 1028.8780498564,
1098.54431357711, 1265.26965941035, 1129.58344809909, 820.922447928053,
749.343583476846, 779.678206156474, 646.575242339517, 733.953282899613,
461.156280127354, 906.813018662913, 798.186995701282, 831.365377249207,
764.519073183124, 672.076289062505, 669.879217186302, 1341.47673353751,
1401.44881976186, 1640.27575962036)), .Names = c("x", "y"), row.names = c(NA,
-45L), class = "data.frame")
I have created two non-linear regressions (nls1 and nls2) based on my dataset.
library(stats)
nls1 <- nls(y ~ A*(x^B)*(exp(k*x)),
            data = df,
            start = list(A = 1000, B = 0.170, k = -0.00295))
nls2 <- nls(y ~ A*x^3 + B*x^2 + C*x + D,
            data = df,
            start = list(A = 0.02, B = -0.6, C = 50, D = 200))
I then computed bootstrap objects for these two functions to get multiple sets of parameters (A,B and k for nls1 and A, B, C and D for nls2).
library(nlstools)
Boo1 <- nlsBoot(nls1, niter = 200)
Boo2 <- nlsBoot(nls2, niter = 200)
Based on these bootstrap objects, I would like to compute the r-squared for each combination of parameters and save the min, max and median of the r-squared values for each bootstrap object into one new dataframe. The dataframe could look like new.df:
structure(list(Median = c(NA, NA), Max = c(NA, NA), Min = c(NA,
NA)), .Names = c("Median", "Max", "Min"), row.names = c("nls1",
"nls2"), class = "data.frame")
The idea is then to do some box plots with the median, min and max values for each non-linear model based on bootstrapping to compare them. Can someone help me out with that? Thanks in advance.
Answer from #bunk
library(boot)
stat <- function(dat, inds) {
  fit <- try(nls(y ~ A*(x^B)*(exp(k*x)), data = dat[inds, ],
                 start = list(A = 1000, B = 0.170, k = -0.00295)), silent = TRUE)
  f1 <- if (inherits(fit, "nls")) AIC(fit) else NA
  fit2 <- try(nls(y ~ A*x^3 + B*x^2 + C*x + D, data = dat[inds, ],
                  start = list(A = 0.02, B = -0.6, C = 50, D = 200)), silent = TRUE)
  f2 <- if (inherits(fit2, "nls")) AIC(fit2) else NA
  c(f1, f2)
}
res <- boot(df, stat, R = 200)
# Then, to get the medians for example:
apply(res$t, 2, median, na.rm = TRUE)
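If you specifically want the r-squared summary described in the question rather than AIC, a possible sketch based on the nlsBoot objects (this assumes Boo1$coefboot and Boo2$coefboot each hold one row of bootstrapped parameter estimates per iteration; new.df follows the layout given above):
# Pseudo R-squared for every bootstrapped parameter set of each model
r2_nls1 <- apply(Boo1$coefboot, 1, function(p) {
  pred <- p["A"] * df$x^p["B"] * exp(p["k"] * df$x)
  1 - sum((df$y - pred)^2) / sum((df$y - mean(df$y))^2)
})
r2_nls2 <- apply(Boo2$coefboot, 1, function(p) {
  pred <- p["A"] * df$x^3 + p["B"] * df$x^2 + p["C"] * df$x + p["D"]
  1 - sum((df$y - pred)^2) / sum((df$y - mean(df$y))^2)
})
new.df <- data.frame(
  Median = c(median(r2_nls1), median(r2_nls2)),
  Max    = c(max(r2_nls1),    max(r2_nls2)),
  Min    = c(min(r2_nls1),    min(r2_nls2)),
  row.names = c("nls1", "nls2")
)
boxplot(list(nls1 = r2_nls1, nls2 = r2_nls2), ylab = "Bootstrap pseudo R-squared")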
I've got this problem and I still haven't found out how to solve it. I want to plot all the values MW1, MW2 and MW3 as a function of "DHT + Procymidone". How can I plot all these values in one graphic so that I get 3 different curves (in different colours and numbered, e.g. curve 1, 2, ...)? And I want the labels of the x values ("DHT + Procymidone") to be -10, -9, ..., -4 instead of 1,00E-10, ...
DHT + Procymidone MW 1 MW 2 MW 3
1,00E-10 114,259526780335 111,022461066274 213,212408408682
1,00E-09 115,024187788314 111,083316791613 114,529425136628
1,00E-08 110,517449986348 107,867941606743 125,10230718665
1,00E-07 100,961311263444 98,4219995773135 116,045168653416
1,00E-06 71,2383604211297 73,539659636842 50,3213799775309
1,00E-05 20,3553333652104 36,1345771905088 15,42260866106
1,00E-04 4,06189509055904 18,1246447874679 10,1988107887318
I have shortened your data frame for convenience reasons, so here's an example:
mydat <- data.frame(DHT_Procymidone = c(-10, -9, -8, -7, -6, -5, -4),
MW1 = c(114, 115, 110, 100, 72, 20, 4),
MW2 = c(111, 111, 107, 98, 73, 36, 18),
MW3 = c(213, 114, 123, 116, 50, 15, 10))
library(tidyr)
library(ggplot2)
mydf <- gather(mydat, "grp", "MW", 2:4)
ggplot(mydf, aes(x = DHT_Procymidone, y = MW, colour = grp)) + geom_line()
which gives following plot:
To use ggplot, your data needs to be in long format. gather does this for you, stacking the columns MW1-MW3 into one column, while the original column names are stored as values in the new grp column. This group column is what identifies the different groups, i.e. the differently coloured lines in the plot.
Depending on the class of the DHT + Procymidone column, you can, e.g., use format(..., scientific = FALSE) to reformat the numbers; however, this will result in labels like -0.0000000001 (and not -10).
However, if this data column is a character vector (you can coerce with as.character), this may work:
a <- "1,00E-10"
sub("1,00E", "", a, fixed = TRUE)
> [1] "-10"
As an alternative to #Daniel's answer which doesn't rely on ggplot (thanks, Daniel, for providing the reproducible data):
mydat <- data.frame(DHT_Procymidone = c(-10, -9, -8, -7, -6, -5, -4),
MW1 = c(114, 115, 110, 100, 72, 20, 4),
MW2 = c(111, 111, 107, 98, 73, 36, 18),
MW3 = c(213, 114, 123, 116, 50, 15, 10))
plot(mydat[,2] ~ mydat[,1], typ = "l", ylim = c(0,220), xlim = c(-10,-2), xlab = "DHT Procymidone", ylab = "MW")
lines(mydat[,3] ~ mydat[,1], col = "blue")
lines(mydat[,4] ~ mydat[,1], col = "red")
legend(x = -4, y = 200, legend = c("MW1","MW2","MW3"), lty = 1, bty = "n", col = c("black","blue","red"))
To change the axis labels, see the xlab and ylab arguments; to change the axis limits, see xlim and ylim.